00:00:00.001 Started by upstream project "autotest-per-patch" build number 126218
00:00:00.001 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.083 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.084 The recommended git tool is: git
00:00:00.084 using credential 00000000-0000-0000-0000-000000000002
00:00:00.086 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.137 Fetching changes from the remote Git repository
00:00:00.139 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.196 Using shallow fetch with depth 1
00:00:00.196 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.196 > git --version # timeout=10
00:00:00.238 > git --version # 'git version 2.39.2'
00:00:00.238 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.269 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.269 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.967 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.977 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.989 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD)
00:00:03.989 > git config core.sparsecheckout # timeout=10
00:00:03.999 > git read-tree -mu HEAD # timeout=10
00:00:04.018 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5
00:00:04.041 Commit message: "jenkins/jjb-config: Purge centos leftovers"
00:00:04.041 > git rev-list --no-walk 5fe533b64b2bcae2206a8f61fddcc62257280cde # timeout=10
00:00:04.139 [Pipeline] Start of Pipeline
00:00:04.150 [Pipeline] library
00:00:04.151 Loading library shm_lib@master
00:00:04.151 Library shm_lib@master is cached. Copying from home.
00:00:04.171 [Pipeline] node
00:00:04.180 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.181 [Pipeline] {
00:00:04.194 [Pipeline] catchError
00:00:04.195 [Pipeline] {
00:00:04.207 [Pipeline] wrap
00:00:04.216 [Pipeline] {
00:00:04.228 [Pipeline] stage
00:00:04.230 [Pipeline] { (Prologue)
00:00:04.403 [Pipeline] sh
00:00:04.682 + logger -p user.info -t JENKINS-CI
00:00:04.697 [Pipeline] echo
00:00:04.698 Node: WFP8
00:00:04.705 [Pipeline] sh
00:00:05.000 [Pipeline] setCustomBuildProperty
00:00:05.011 [Pipeline] echo
00:00:05.013 Cleanup processes
00:00:05.017 [Pipeline] sh
00:00:05.294 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.295 800849 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.307 [Pipeline] sh
00:00:05.589 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.589 ++ grep -v 'sudo pgrep'
00:00:05.589 ++ awk '{print $1}'
00:00:05.590 + sudo kill -9
00:00:05.590 + true
00:00:05.603 [Pipeline] cleanWs
00:00:05.613 [WS-CLEANUP] Deleting project workspace...
00:00:05.613 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.619 [WS-CLEANUP] done
00:00:05.624 [Pipeline] setCustomBuildProperty
00:00:05.641 [Pipeline] sh
00:00:05.924 + sudo git config --global --replace-all safe.directory '*'
00:00:05.988 [Pipeline] httpRequest
00:00:06.008 [Pipeline] echo
00:00:06.010 Sorcerer 10.211.164.101 is alive
00:00:06.016 [Pipeline] httpRequest
00:00:06.019 HttpMethod: GET
00:00:06.020 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:06.020 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:06.023 Response Code: HTTP/1.1 200 OK
00:00:06.023 Success: Status code 200 is in the accepted range: 200,404
00:00:06.024 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:07.003 [Pipeline] sh
00:00:07.323 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:07.339 [Pipeline] httpRequest
00:00:07.357 [Pipeline] echo
00:00:07.359 Sorcerer 10.211.164.101 is alive
00:00:07.367 [Pipeline] httpRequest
00:00:07.371 HttpMethod: GET
00:00:07.371 URL: http://10.211.164.101/packages/spdk_abb6b4c21d65f7ff0102143a8b88e772245ea018.tar.gz
00:00:07.372 Sending request to url: http://10.211.164.101/packages/spdk_abb6b4c21d65f7ff0102143a8b88e772245ea018.tar.gz
00:00:07.381 Response Code: HTTP/1.1 200 OK
00:00:07.382 Success: Status code 200 is in the accepted range: 200,404
00:00:07.382 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_abb6b4c21d65f7ff0102143a8b88e772245ea018.tar.gz
00:00:37.807 [Pipeline] sh
00:00:38.091 + tar --no-same-owner -xf spdk_abb6b4c21d65f7ff0102143a8b88e772245ea018.tar.gz
00:00:40.633 [Pipeline] sh
00:00:40.914 + git -C spdk log --oneline -n5
00:00:40.914 abb6b4c21 pkgdep/git: Lock QAT's no-vmlinux.patch to proper make version
00:00:40.914 cb8c5f3fe pkgdep/git: Add extra libnl-genl dev package to QAT's dependencies
00:00:40.914 719d03c6a sock/uring: only register net impl if supported
00:00:40.914 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev
00:00:40.914 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO
00:00:40.925 [Pipeline] }
00:00:40.944 [Pipeline] // stage
00:00:40.955 [Pipeline] stage
00:00:40.957 [Pipeline] { (Prepare)
00:00:40.981 [Pipeline] writeFile
00:00:41.002 [Pipeline] sh
00:00:41.283 + logger -p user.info -t JENKINS-CI
00:00:41.295 [Pipeline] sh
00:00:41.578 + logger -p user.info -t JENKINS-CI
00:00:41.593 [Pipeline] sh
00:00:41.876 + cat autorun-spdk.conf
00:00:41.876 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:41.876 SPDK_TEST_NVMF=1
00:00:41.876 SPDK_TEST_NVME_CLI=1
00:00:41.876 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:41.876 SPDK_TEST_NVMF_NICS=e810
00:00:41.876 SPDK_TEST_VFIOUSER=1
00:00:41.876 SPDK_RUN_UBSAN=1
00:00:41.876 NET_TYPE=phy
00:00:41.884 RUN_NIGHTLY=0
00:00:41.890 [Pipeline] readFile
00:00:41.923 [Pipeline] withEnv
00:00:41.925 [Pipeline] {
00:00:41.941 [Pipeline] sh
00:00:42.226 + set -ex
00:00:42.226 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:42.226 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:42.226 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:42.226 ++ SPDK_TEST_NVMF=1
00:00:42.226 ++ SPDK_TEST_NVME_CLI=1
00:00:42.226 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:42.226 ++ SPDK_TEST_NVMF_NICS=e810
00:00:42.226 ++ SPDK_TEST_VFIOUSER=1
00:00:42.226 ++ SPDK_RUN_UBSAN=1
00:00:42.226 ++ NET_TYPE=phy
00:00:42.226 ++ RUN_NIGHTLY=0
00:00:42.226 + case $SPDK_TEST_NVMF_NICS in
00:00:42.226 + DRIVERS=ice
00:00:42.226 + [[ tcp == \r\d\m\a ]]
00:00:42.226 + [[ -n ice ]]
00:00:42.226 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:42.226 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:42.226 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:42.226 rmmod: ERROR: Module irdma is not currently loaded
00:00:42.226 rmmod: ERROR: Module i40iw is not currently loaded
00:00:42.226 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:42.226 + true
00:00:42.226 + for D in $DRIVERS
00:00:42.226 + sudo modprobe ice
00:00:42.226 + exit 0
00:00:42.236 [Pipeline] }
00:00:42.255 [Pipeline] // withEnv
00:00:42.261 [Pipeline] }
00:00:42.278 [Pipeline] // stage
00:00:42.288 [Pipeline] catchError
00:00:42.290 [Pipeline] {
00:00:42.306 [Pipeline] timeout
00:00:42.306 Timeout set to expire in 50 min
00:00:42.308 [Pipeline] {
00:00:42.323 [Pipeline] stage
00:00:42.326 [Pipeline] { (Tests)
00:00:42.342 [Pipeline] sh
00:00:42.628 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:42.628 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:42.628 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:42.628 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:42.628 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:42.628 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:42.628 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:42.628 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:42.628 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:42.628 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:42.628 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:42.628 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:42.628 + source /etc/os-release
00:00:42.628 ++ NAME='Fedora Linux'
00:00:42.628 ++ VERSION='38 (Cloud Edition)'
00:00:42.628 ++ ID=fedora
00:00:42.628 ++ VERSION_ID=38
00:00:42.628 ++ VERSION_CODENAME=
00:00:42.628 ++ PLATFORM_ID=platform:f38
00:00:42.628 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:42.628 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:42.628 ++ LOGO=fedora-logo-icon
00:00:42.628 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:42.628 ++ HOME_URL=https://fedoraproject.org/
00:00:42.628 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:42.628 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:42.628 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:42.628 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:42.628 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:42.628 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:42.628 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:42.628 ++ SUPPORT_END=2024-05-14
00:00:42.628 ++ VARIANT='Cloud Edition'
00:00:42.628 ++ VARIANT_ID=cloud
00:00:42.628 + uname -a
00:00:42.628 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:42.628 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:44.556 Hugepages
00:00:44.556 node hugesize free / total
00:00:44.556 node0 1048576kB 0 / 0
00:00:44.556 node0 2048kB 0 / 0
00:00:44.556 node1 1048576kB 0 / 0
00:00:44.556 node1 2048kB 0 / 0
00:00:44.556
00:00:44.556 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:44.556 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:44.556 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:44.556 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:44.556 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:44.556 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:44.556 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:44.556 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:44.556 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:44.816 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:00:44.816 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:44.816 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:44.816 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:44.816 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:44.816 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:44.816 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:44.816 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:44.816 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:44.816 + rm -f /tmp/spdk-ld-path
00:00:44.816 + source autorun-spdk.conf
00:00:44.816 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:44.816 ++ SPDK_TEST_NVMF=1
00:00:44.816 ++ SPDK_TEST_NVME_CLI=1
00:00:44.816 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:44.816 ++ SPDK_TEST_NVMF_NICS=e810
00:00:44.816 ++ SPDK_TEST_VFIOUSER=1
00:00:44.816 ++ SPDK_RUN_UBSAN=1
00:00:44.816 ++ NET_TYPE=phy
00:00:44.816 ++ RUN_NIGHTLY=0
00:00:44.816 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:44.816 + [[ -n '' ]]
00:00:44.816 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:44.817 + for M in /var/spdk/build-*-manifest.txt
00:00:44.817 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:44.817 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:44.817 + for M in /var/spdk/build-*-manifest.txt
00:00:44.817 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:44.817 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:44.817 ++ uname
00:00:44.817 + [[ Linux == \L\i\n\u\x ]]
00:00:44.817 + sudo dmesg -T
00:00:44.817 + sudo dmesg --clear
00:00:44.817 + dmesg_pid=801781
00:00:44.817 + [[ Fedora Linux == FreeBSD ]]
00:00:44.817 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:44.817 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:44.817 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:44.817 + [[ -x /usr/src/fio-static/fio ]]
00:00:44.817 + export FIO_BIN=/usr/src/fio-static/fio
00:00:44.817 + FIO_BIN=/usr/src/fio-static/fio
00:00:44.817 + sudo dmesg -Tw
00:00:44.817 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:44.817 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:44.817 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:44.817 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:44.817 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:44.817 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:44.817 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:44.817 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:44.817 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:44.817 Test configuration:
00:00:44.817 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:44.817 SPDK_TEST_NVMF=1
00:00:44.817 SPDK_TEST_NVME_CLI=1
00:00:44.817 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:44.817 SPDK_TEST_NVMF_NICS=e810
00:00:44.817 SPDK_TEST_VFIOUSER=1
00:00:44.817 SPDK_RUN_UBSAN=1
00:00:44.817 NET_TYPE=phy
00:00:44.817 RUN_NIGHTLY=0 18:26:01 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:00:44.817 18:26:01 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:44.817 18:26:01 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:44.817 18:26:01 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:44.817 18:26:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:44.817 18:26:01 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:44.817 18:26:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:44.817 18:26:01 -- paths/export.sh@5 -- $ export PATH
00:00:44.817 18:26:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:44.817 18:26:01 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:00:44.817 18:26:01 -- common/autobuild_common.sh@444 -- $ date +%s
00:00:44.817 18:26:01 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721060761.XXXXXX
00:00:45.076 18:26:01 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721060761.1bQq2Q
00:00:45.076 18:26:01 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:00:45.076 18:26:01 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:00:45.076 18:26:01 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:00:45.076 18:26:01 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:45.076 18:26:01 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:45.076 18:26:01 -- common/autobuild_common.sh@460 -- $ get_config_params
00:00:45.076 18:26:01 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:00:45.076 18:26:01 -- common/autotest_common.sh@10 -- $ set +x
00:00:45.076 18:26:01 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:00:45.076 18:26:01 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:00:45.076 18:26:01 -- pm/common@17 -- $ local monitor
00:00:45.076 18:26:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:45.076 18:26:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:45.076 18:26:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:45.076 18:26:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:45.076 18:26:01 -- pm/common@25 -- $ sleep 1
00:00:45.076 18:26:01 -- pm/common@21 -- $ date +%s
00:00:45.076 18:26:01 -- pm/common@21 -- $ date +%s
00:00:45.076 18:26:01 -- pm/common@21 -- $ date +%s
00:00:45.076 18:26:01 -- pm/common@21 -- $ date +%s
00:00:45.076 18:26:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721060761
00:00:45.076 18:26:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721060761
00:00:45.076 18:26:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721060761
00:00:45.076 18:26:01 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721060761
00:00:45.076 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721060761_collect-vmstat.pm.log
00:00:45.076 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721060761_collect-cpu-temp.pm.log
00:00:45.076 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721060761_collect-cpu-load.pm.log
00:00:45.076 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721060761_collect-bmc-pm.bmc.pm.log
00:00:46.015 18:26:02 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:00:46.015 18:26:02 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:46.015 18:26:02 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:46.015 18:26:02 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:46.015 18:26:02 -- spdk/autobuild.sh@16 -- $ date -u
00:00:46.015 Mon Jul 15 04:26:02 PM UTC 2024
00:00:46.015 18:26:02 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:46.015 v24.09-pre-204-gabb6b4c21
00:00:46.015 18:26:02 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:46.015 18:26:02 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:46.015 18:26:02 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:46.015 18:26:02 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:00:46.015 18:26:02 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:00:46.015 18:26:02 -- common/autotest_common.sh@10 -- $ set +x
00:00:46.015 ************************************
00:00:46.015 START TEST ubsan
00:00:46.015 ************************************
00:00:46.015 18:26:02 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:00:46.015 using ubsan
00:00:46.015
00:00:46.015 real 0m0.000s
00:00:46.015 user 0m0.000s
00:00:46.015 sys 0m0.000s
00:00:46.015 18:26:02 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:00:46.015 18:26:02 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:46.015 ************************************
00:00:46.015 END TEST ubsan
00:00:46.015 ************************************
00:00:46.015 18:26:02 -- common/autotest_common.sh@1142 -- $ return 0
00:00:46.015 18:26:02 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:46.015 18:26:02 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:46.015 18:26:02 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:46.015 18:26:02 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:46.015 18:26:02 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:46.015 18:26:02 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:46.015 18:26:02 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:46.015 18:26:02 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:46.015 18:26:02 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:00:46.275 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:46.275 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:46.534 Using 'verbs' RDMA provider
00:00:59.380 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:09.363 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:09.363 Creating mk/config.mk...done.
00:01:09.363 Creating mk/cc.flags.mk...done.
00:01:09.363 Type 'make' to build.
00:01:09.363 18:26:25 -- spdk/autobuild.sh@69 -- $ run_test make make -j96
00:01:09.363 18:26:25 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:01:09.363 18:26:25 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:09.363 18:26:25 -- common/autotest_common.sh@10 -- $ set +x
00:01:09.363 ************************************
00:01:09.363 START TEST make
00:01:09.363 ************************************
00:01:09.363 18:26:25 make -- common/autotest_common.sh@1123 -- $ make -j96
00:01:09.622 make[1]: Nothing to be done for 'all'.
00:01:11.004 The Meson build system
00:01:11.004 Version: 1.3.1
00:01:11.004 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:11.004 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:11.004 Build type: native build
00:01:11.004 Project name: libvfio-user
00:01:11.004 Project version: 0.0.1
00:01:11.004 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:11.004 C linker for the host machine: cc ld.bfd 2.39-16
00:01:11.004 Host machine cpu family: x86_64
00:01:11.004 Host machine cpu: x86_64
00:01:11.004 Run-time dependency threads found: YES
00:01:11.004 Library dl found: YES
00:01:11.004 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:11.004 Run-time dependency json-c found: YES 0.17
00:01:11.004 Run-time dependency cmocka found: YES 1.1.7
00:01:11.004 Program pytest-3 found: NO
00:01:11.004 Program flake8 found: NO
00:01:11.004 Program misspell-fixer found: NO
00:01:11.004 Program restructuredtext-lint found: NO
00:01:11.004 Program valgrind found: YES (/usr/bin/valgrind)
00:01:11.004 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:11.004 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:11.004 Compiler for C supports arguments -Wwrite-strings: YES
00:01:11.004 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:11.004 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:11.004 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:11.004 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:11.004 Build targets in project: 8
00:01:11.004 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:11.004 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:11.004
00:01:11.004 libvfio-user 0.0.1
00:01:11.004
00:01:11.004 User defined options
00:01:11.004 buildtype : debug
00:01:11.004 default_library: shared
00:01:11.004 libdir : /usr/local/lib
00:01:11.004
00:01:11.004 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:11.269 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:11.269 [1/37] Compiling C object samples/null.p/null.c.o
00:01:11.269 [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:11.269 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:11.269 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:11.269 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:11.269 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:11.269 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:11.269 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:11.269 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:11.269 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:11.269 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:11.269 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:11.269 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:11.270 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:11.270 [15/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:11.270 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:11.270 [17/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:11.528 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:11.528 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:11.528 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:11.528 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:11.528 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:11.528 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:11.528 [24/37] Compiling C object samples/server.p/server.c.o
00:01:11.528 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:11.528 [26/37] Compiling C object samples/client.p/client.c.o
00:01:11.528 [27/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:11.528 [28/37] Linking target samples/client
00:01:11.528 [29/37] Linking target test/unit_tests
00:01:11.528 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:11.528 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:01:11.787 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:11.787 [33/37] Linking target samples/shadow_ioeventfd_server
00:01:11.787 [34/37] Linking target samples/server
00:01:11.787 [35/37] Linking target samples/null
00:01:11.787 [36/37] Linking target samples/lspci
00:01:11.787 [37/37] Linking target samples/gpio-pci-idio-16
00:01:11.787 INFO: autodetecting backend as ninja
00:01:11.787 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:11.787 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:12.046 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:12.306 ninja: no work to do.
00:01:16.500 The Meson build system
00:01:16.500 Version: 1.3.1
00:01:16.500 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:16.500 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:16.500 Build type: native build
00:01:16.500 Program cat found: YES (/usr/bin/cat)
00:01:16.500 Project name: DPDK
00:01:16.500 Project version: 24.03.0
00:01:16.500 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:16.500 C linker for the host machine: cc ld.bfd 2.39-16
00:01:16.500 Host machine cpu family: x86_64
00:01:16.500 Host machine cpu: x86_64
00:01:16.500 Message: ## Building in Developer Mode ##
00:01:16.500 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:16.500 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:16.500 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:16.500 Program python3 found: YES (/usr/bin/python3)
00:01:16.500 Program cat found: YES (/usr/bin/cat)
00:01:16.500 Compiler for C supports arguments -march=native: YES
00:01:16.500 Checking for size of "void *" : 8
00:01:16.500 Checking for size of "void *" : 8 (cached)
00:01:16.500 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:16.500 Library m found: YES
00:01:16.500 Library numa found: YES
00:01:16.500 Has header "numaif.h" : YES
00:01:16.500 Library fdt found: NO
00:01:16.500 Library execinfo found: NO
00:01:16.500 Has header "execinfo.h" : YES
00:01:16.500 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:16.500 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:16.500 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:16.500 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:16.500 Run-time dependency openssl found: YES 3.0.9
00:01:16.500 Run-time dependency libpcap found: YES 1.10.4
00:01:16.500 Has header "pcap.h" with dependency libpcap: YES
00:01:16.500 Compiler for C supports arguments -Wcast-qual: YES
00:01:16.500 Compiler for C supports arguments -Wdeprecated: YES
00:01:16.500 Compiler for C supports arguments -Wformat: YES
00:01:16.500 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:16.500 Compiler for C supports arguments -Wformat-security: NO
00:01:16.500 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:16.500 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:16.500 Compiler for C supports arguments -Wnested-externs: YES
00:01:16.500 Compiler for C supports arguments -Wold-style-definition: YES
00:01:16.500 Compiler for C supports arguments -Wpointer-arith: YES
00:01:16.500 Compiler for C supports arguments -Wsign-compare: YES
00:01:16.500 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:16.500 Compiler for C supports arguments -Wundef: YES
00:01:16.500 Compiler for C supports arguments -Wwrite-strings: YES
00:01:16.500 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:16.500 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:16.500 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:16.500 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:16.500 Program objdump found: YES (/usr/bin/objdump)
00:01:16.500 Compiler for C supports arguments -mavx512f: YES
00:01:16.500 Checking if "AVX512 checking" compiles: YES
00:01:16.500 Fetching value of define "__SSE4_2__" : 1
00:01:16.500 Fetching value of define "__AES__" : 1
00:01:16.500 Fetching value of define "__AVX__" : 1
00:01:16.500 Fetching value of define "__AVX2__" : 1
00:01:16.500 Fetching value of define "__AVX512BW__" : 1
00:01:16.500 Fetching value of define "__AVX512CD__" : 1
00:01:16.500 Fetching value of define "__AVX512DQ__" : 1
00:01:16.501 Fetching value of define "__AVX512F__" : 1
00:01:16.501 Fetching value of define "__AVX512VL__" : 1 00:01:16.501 Fetching value of define "__PCLMUL__" : 1 00:01:16.501 Fetching value of define "__RDRND__" : 1 00:01:16.501 Fetching value of define "__RDSEED__" : 1 00:01:16.501 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:16.501 Fetching value of define "__znver1__" : (undefined) 00:01:16.501 Fetching value of define "__znver2__" : (undefined) 00:01:16.501 Fetching value of define "__znver3__" : (undefined) 00:01:16.501 Fetching value of define "__znver4__" : (undefined) 00:01:16.501 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:16.501 Message: lib/log: Defining dependency "log" 00:01:16.501 Message: lib/kvargs: Defining dependency "kvargs" 00:01:16.501 Message: lib/telemetry: Defining dependency "telemetry" 00:01:16.501 Checking for function "getentropy" : NO 00:01:16.501 Message: lib/eal: Defining dependency "eal" 00:01:16.501 Message: lib/ring: Defining dependency "ring" 00:01:16.501 Message: lib/rcu: Defining dependency "rcu" 00:01:16.501 Message: lib/mempool: Defining dependency "mempool" 00:01:16.501 Message: lib/mbuf: Defining dependency "mbuf" 00:01:16.501 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:16.501 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:16.501 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:16.501 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:16.501 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:16.501 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:16.501 Compiler for C supports arguments -mpclmul: YES 00:01:16.501 Compiler for C supports arguments -maes: YES 00:01:16.501 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:16.501 Compiler for C supports arguments -mavx512bw: YES 00:01:16.501 Compiler for C supports arguments -mavx512dq: YES 00:01:16.501 Compiler for C supports arguments -mavx512vl: YES 00:01:16.501 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:01:16.501 Compiler for C supports arguments -mavx2: YES 00:01:16.501 Compiler for C supports arguments -mavx: YES 00:01:16.501 Message: lib/net: Defining dependency "net" 00:01:16.501 Message: lib/meter: Defining dependency "meter" 00:01:16.501 Message: lib/ethdev: Defining dependency "ethdev" 00:01:16.501 Message: lib/pci: Defining dependency "pci" 00:01:16.501 Message: lib/cmdline: Defining dependency "cmdline" 00:01:16.501 Message: lib/hash: Defining dependency "hash" 00:01:16.501 Message: lib/timer: Defining dependency "timer" 00:01:16.501 Message: lib/compressdev: Defining dependency "compressdev" 00:01:16.501 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:16.501 Message: lib/dmadev: Defining dependency "dmadev" 00:01:16.501 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:16.501 Message: lib/power: Defining dependency "power" 00:01:16.501 Message: lib/reorder: Defining dependency "reorder" 00:01:16.501 Message: lib/security: Defining dependency "security" 00:01:16.501 Has header "linux/userfaultfd.h" : YES 00:01:16.501 Has header "linux/vduse.h" : YES 00:01:16.501 Message: lib/vhost: Defining dependency "vhost" 00:01:16.501 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:16.501 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:16.501 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:16.501 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:16.501 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:16.501 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:16.501 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:16.501 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:16.501 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:16.501 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:01:16.501 Program doxygen found: YES (/usr/bin/doxygen) 00:01:16.501 Configuring doxy-api-html.conf using configuration 00:01:16.501 Configuring doxy-api-man.conf using configuration 00:01:16.501 Program mandb found: YES (/usr/bin/mandb) 00:01:16.501 Program sphinx-build found: NO 00:01:16.501 Configuring rte_build_config.h using configuration 00:01:16.501 Message: 00:01:16.501 ================= 00:01:16.501 Applications Enabled 00:01:16.501 ================= 00:01:16.501 00:01:16.501 apps: 00:01:16.501 00:01:16.501 00:01:16.501 Message: 00:01:16.501 ================= 00:01:16.501 Libraries Enabled 00:01:16.501 ================= 00:01:16.501 00:01:16.501 libs: 00:01:16.501 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:16.501 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:16.501 cryptodev, dmadev, power, reorder, security, vhost, 00:01:16.501 00:01:16.501 Message: 00:01:16.501 =============== 00:01:16.501 Drivers Enabled 00:01:16.501 =============== 00:01:16.501 00:01:16.501 common: 00:01:16.501 00:01:16.501 bus: 00:01:16.501 pci, vdev, 00:01:16.501 mempool: 00:01:16.501 ring, 00:01:16.501 dma: 00:01:16.501 00:01:16.501 net: 00:01:16.501 00:01:16.501 crypto: 00:01:16.501 00:01:16.501 compress: 00:01:16.501 00:01:16.501 vdpa: 00:01:16.501 00:01:16.501 00:01:16.501 Message: 00:01:16.501 ================= 00:01:16.501 Content Skipped 00:01:16.501 ================= 00:01:16.501 00:01:16.501 apps: 00:01:16.501 dumpcap: explicitly disabled via build config 00:01:16.501 graph: explicitly disabled via build config 00:01:16.501 pdump: explicitly disabled via build config 00:01:16.501 proc-info: explicitly disabled via build config 00:01:16.501 test-acl: explicitly disabled via build config 00:01:16.501 test-bbdev: explicitly disabled via build config 00:01:16.501 test-cmdline: explicitly disabled via build config 00:01:16.501 test-compress-perf: explicitly disabled via build config 00:01:16.501 test-crypto-perf: explicitly disabled via 
build config 00:01:16.501 test-dma-perf: explicitly disabled via build config 00:01:16.501 test-eventdev: explicitly disabled via build config 00:01:16.501 test-fib: explicitly disabled via build config 00:01:16.501 test-flow-perf: explicitly disabled via build config 00:01:16.501 test-gpudev: explicitly disabled via build config 00:01:16.501 test-mldev: explicitly disabled via build config 00:01:16.501 test-pipeline: explicitly disabled via build config 00:01:16.501 test-pmd: explicitly disabled via build config 00:01:16.501 test-regex: explicitly disabled via build config 00:01:16.501 test-sad: explicitly disabled via build config 00:01:16.501 test-security-perf: explicitly disabled via build config 00:01:16.501 00:01:16.501 libs: 00:01:16.501 argparse: explicitly disabled via build config 00:01:16.501 metrics: explicitly disabled via build config 00:01:16.501 acl: explicitly disabled via build config 00:01:16.501 bbdev: explicitly disabled via build config 00:01:16.501 bitratestats: explicitly disabled via build config 00:01:16.501 bpf: explicitly disabled via build config 00:01:16.501 cfgfile: explicitly disabled via build config 00:01:16.501 distributor: explicitly disabled via build config 00:01:16.501 efd: explicitly disabled via build config 00:01:16.501 eventdev: explicitly disabled via build config 00:01:16.501 dispatcher: explicitly disabled via build config 00:01:16.501 gpudev: explicitly disabled via build config 00:01:16.501 gro: explicitly disabled via build config 00:01:16.501 gso: explicitly disabled via build config 00:01:16.501 ip_frag: explicitly disabled via build config 00:01:16.501 jobstats: explicitly disabled via build config 00:01:16.501 latencystats: explicitly disabled via build config 00:01:16.501 lpm: explicitly disabled via build config 00:01:16.501 member: explicitly disabled via build config 00:01:16.501 pcapng: explicitly disabled via build config 00:01:16.501 rawdev: explicitly disabled via build config 00:01:16.501 regexdev: 
explicitly disabled via build config 00:01:16.501 mldev: explicitly disabled via build config 00:01:16.501 rib: explicitly disabled via build config 00:01:16.501 sched: explicitly disabled via build config 00:01:16.501 stack: explicitly disabled via build config 00:01:16.501 ipsec: explicitly disabled via build config 00:01:16.501 pdcp: explicitly disabled via build config 00:01:16.501 fib: explicitly disabled via build config 00:01:16.501 port: explicitly disabled via build config 00:01:16.501 pdump: explicitly disabled via build config 00:01:16.501 table: explicitly disabled via build config 00:01:16.501 pipeline: explicitly disabled via build config 00:01:16.501 graph: explicitly disabled via build config 00:01:16.501 node: explicitly disabled via build config 00:01:16.501 00:01:16.501 drivers: 00:01:16.501 common/cpt: not in enabled drivers build config 00:01:16.501 common/dpaax: not in enabled drivers build config 00:01:16.501 common/iavf: not in enabled drivers build config 00:01:16.501 common/idpf: not in enabled drivers build config 00:01:16.501 common/ionic: not in enabled drivers build config 00:01:16.501 common/mvep: not in enabled drivers build config 00:01:16.501 common/octeontx: not in enabled drivers build config 00:01:16.501 bus/auxiliary: not in enabled drivers build config 00:01:16.501 bus/cdx: not in enabled drivers build config 00:01:16.501 bus/dpaa: not in enabled drivers build config 00:01:16.501 bus/fslmc: not in enabled drivers build config 00:01:16.501 bus/ifpga: not in enabled drivers build config 00:01:16.501 bus/platform: not in enabled drivers build config 00:01:16.501 bus/uacce: not in enabled drivers build config 00:01:16.501 bus/vmbus: not in enabled drivers build config 00:01:16.501 common/cnxk: not in enabled drivers build config 00:01:16.501 common/mlx5: not in enabled drivers build config 00:01:16.501 common/nfp: not in enabled drivers build config 00:01:16.501 common/nitrox: not in enabled drivers build config 00:01:16.501 
common/qat: not in enabled drivers build config 00:01:16.501 common/sfc_efx: not in enabled drivers build config 00:01:16.501 mempool/bucket: not in enabled drivers build config 00:01:16.501 mempool/cnxk: not in enabled drivers build config 00:01:16.501 mempool/dpaa: not in enabled drivers build config 00:01:16.501 mempool/dpaa2: not in enabled drivers build config 00:01:16.501 mempool/octeontx: not in enabled drivers build config 00:01:16.501 mempool/stack: not in enabled drivers build config 00:01:16.501 dma/cnxk: not in enabled drivers build config 00:01:16.501 dma/dpaa: not in enabled drivers build config 00:01:16.501 dma/dpaa2: not in enabled drivers build config 00:01:16.501 dma/hisilicon: not in enabled drivers build config 00:01:16.501 dma/idxd: not in enabled drivers build config 00:01:16.501 dma/ioat: not in enabled drivers build config 00:01:16.501 dma/skeleton: not in enabled drivers build config 00:01:16.501 net/af_packet: not in enabled drivers build config 00:01:16.501 net/af_xdp: not in enabled drivers build config 00:01:16.501 net/ark: not in enabled drivers build config 00:01:16.501 net/atlantic: not in enabled drivers build config 00:01:16.501 net/avp: not in enabled drivers build config 00:01:16.501 net/axgbe: not in enabled drivers build config 00:01:16.501 net/bnx2x: not in enabled drivers build config 00:01:16.501 net/bnxt: not in enabled drivers build config 00:01:16.502 net/bonding: not in enabled drivers build config 00:01:16.502 net/cnxk: not in enabled drivers build config 00:01:16.502 net/cpfl: not in enabled drivers build config 00:01:16.502 net/cxgbe: not in enabled drivers build config 00:01:16.502 net/dpaa: not in enabled drivers build config 00:01:16.502 net/dpaa2: not in enabled drivers build config 00:01:16.502 net/e1000: not in enabled drivers build config 00:01:16.502 net/ena: not in enabled drivers build config 00:01:16.502 net/enetc: not in enabled drivers build config 00:01:16.502 net/enetfec: not in enabled drivers build 
config 00:01:16.502 net/enic: not in enabled drivers build config 00:01:16.502 net/failsafe: not in enabled drivers build config 00:01:16.502 net/fm10k: not in enabled drivers build config 00:01:16.502 net/gve: not in enabled drivers build config 00:01:16.502 net/hinic: not in enabled drivers build config 00:01:16.502 net/hns3: not in enabled drivers build config 00:01:16.502 net/i40e: not in enabled drivers build config 00:01:16.502 net/iavf: not in enabled drivers build config 00:01:16.502 net/ice: not in enabled drivers build config 00:01:16.502 net/idpf: not in enabled drivers build config 00:01:16.502 net/igc: not in enabled drivers build config 00:01:16.502 net/ionic: not in enabled drivers build config 00:01:16.502 net/ipn3ke: not in enabled drivers build config 00:01:16.502 net/ixgbe: not in enabled drivers build config 00:01:16.502 net/mana: not in enabled drivers build config 00:01:16.502 net/memif: not in enabled drivers build config 00:01:16.502 net/mlx4: not in enabled drivers build config 00:01:16.502 net/mlx5: not in enabled drivers build config 00:01:16.502 net/mvneta: not in enabled drivers build config 00:01:16.502 net/mvpp2: not in enabled drivers build config 00:01:16.502 net/netvsc: not in enabled drivers build config 00:01:16.502 net/nfb: not in enabled drivers build config 00:01:16.502 net/nfp: not in enabled drivers build config 00:01:16.502 net/ngbe: not in enabled drivers build config 00:01:16.502 net/null: not in enabled drivers build config 00:01:16.502 net/octeontx: not in enabled drivers build config 00:01:16.502 net/octeon_ep: not in enabled drivers build config 00:01:16.502 net/pcap: not in enabled drivers build config 00:01:16.502 net/pfe: not in enabled drivers build config 00:01:16.502 net/qede: not in enabled drivers build config 00:01:16.502 net/ring: not in enabled drivers build config 00:01:16.502 net/sfc: not in enabled drivers build config 00:01:16.502 net/softnic: not in enabled drivers build config 00:01:16.502 net/tap: 
not in enabled drivers build config 00:01:16.502 net/thunderx: not in enabled drivers build config 00:01:16.502 net/txgbe: not in enabled drivers build config 00:01:16.502 net/vdev_netvsc: not in enabled drivers build config 00:01:16.502 net/vhost: not in enabled drivers build config 00:01:16.502 net/virtio: not in enabled drivers build config 00:01:16.502 net/vmxnet3: not in enabled drivers build config 00:01:16.502 raw/*: missing internal dependency, "rawdev" 00:01:16.502 crypto/armv8: not in enabled drivers build config 00:01:16.502 crypto/bcmfs: not in enabled drivers build config 00:01:16.502 crypto/caam_jr: not in enabled drivers build config 00:01:16.502 crypto/ccp: not in enabled drivers build config 00:01:16.502 crypto/cnxk: not in enabled drivers build config 00:01:16.502 crypto/dpaa_sec: not in enabled drivers build config 00:01:16.502 crypto/dpaa2_sec: not in enabled drivers build config 00:01:16.502 crypto/ipsec_mb: not in enabled drivers build config 00:01:16.502 crypto/mlx5: not in enabled drivers build config 00:01:16.502 crypto/mvsam: not in enabled drivers build config 00:01:16.502 crypto/nitrox: not in enabled drivers build config 00:01:16.502 crypto/null: not in enabled drivers build config 00:01:16.502 crypto/octeontx: not in enabled drivers build config 00:01:16.502 crypto/openssl: not in enabled drivers build config 00:01:16.502 crypto/scheduler: not in enabled drivers build config 00:01:16.502 crypto/uadk: not in enabled drivers build config 00:01:16.502 crypto/virtio: not in enabled drivers build config 00:01:16.502 compress/isal: not in enabled drivers build config 00:01:16.502 compress/mlx5: not in enabled drivers build config 00:01:16.502 compress/nitrox: not in enabled drivers build config 00:01:16.502 compress/octeontx: not in enabled drivers build config 00:01:16.502 compress/zlib: not in enabled drivers build config 00:01:16.502 regex/*: missing internal dependency, "regexdev" 00:01:16.502 ml/*: missing internal dependency, "mldev" 
00:01:16.502 vdpa/ifc: not in enabled drivers build config 00:01:16.502 vdpa/mlx5: not in enabled drivers build config 00:01:16.502 vdpa/nfp: not in enabled drivers build config 00:01:16.502 vdpa/sfc: not in enabled drivers build config 00:01:16.502 event/*: missing internal dependency, "eventdev" 00:01:16.502 baseband/*: missing internal dependency, "bbdev" 00:01:16.502 gpu/*: missing internal dependency, "gpudev" 00:01:16.502 00:01:16.502 00:01:16.761 Build targets in project: 85 00:01:16.761 00:01:16.761 DPDK 24.03.0 00:01:16.761 00:01:16.761 User defined options 00:01:16.761 buildtype : debug 00:01:16.761 default_library : shared 00:01:16.761 libdir : lib 00:01:16.761 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:16.761 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:16.761 c_link_args : 00:01:16.761 cpu_instruction_set: native 00:01:16.761 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:16.761 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:16.761 enable_docs : false 00:01:16.761 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:16.761 enable_kmods : false 00:01:16.761 max_lcores : 128 00:01:16.761 tests : false 00:01:16.761 00:01:16.761 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:17.339 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:17.339 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:17.339 [2/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:17.339 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:17.339 [4/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:17.339 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:17.339 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:17.339 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:17.339 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:17.339 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:17.339 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:17.339 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:17.339 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:17.339 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:17.339 [14/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:17.601 [15/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:17.601 [16/268] Linking static target lib/librte_kvargs.a 00:01:17.601 [17/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:17.601 [18/268] Linking static target lib/librte_log.a 00:01:17.601 [19/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:17.601 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:17.601 [21/268] Linking static target lib/librte_pci.a 00:01:17.601 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:17.601 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:17.601 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:17.601 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:17.860 [26/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:17.860 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:17.860 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:17.861 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:17.861 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:17.861 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:17.861 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:17.861 [33/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:17.861 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:17.861 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:17.861 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:17.861 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:17.861 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:17.861 [39/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:17.861 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:17.861 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:17.861 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:17.861 [43/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:17.861 [44/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:17.861 [45/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:17.861 [46/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:17.861 [47/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:17.861 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:17.861 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:17.861 [50/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:17.861 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:17.861 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:17.861 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:17.861 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:17.861 [55/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:17.861 [56/268] Linking static target lib/librte_meter.a 00:01:17.861 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:17.861 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:17.861 [59/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:17.861 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:17.861 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:17.861 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:17.861 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:17.861 [64/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:17.861 [65/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:17.861 [66/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:17.861 [67/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:17.861 [68/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:17.861 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:17.861 [70/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:17.861 [71/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:17.861 [72/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:17.861 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:17.861 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:17.861 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:17.861 [76/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:17.861 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:17.861 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:17.861 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:17.861 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:17.861 [81/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:17.861 [82/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:17.861 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:17.861 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:17.861 [85/268] Linking static target lib/librte_telemetry.a 00:01:17.861 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:17.861 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:17.861 [88/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:17.861 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:17.861 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:17.861 [91/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:17.861 [92/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:17.861 [93/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:17.861 [94/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:18.120 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:18.120 [96/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:18.120 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:18.120 [98/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:18.120 [99/268] Linking static target lib/librte_ring.a 00:01:18.120 [100/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:18.120 [101/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.120 [102/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:18.120 [103/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:18.120 [104/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:18.120 [105/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:18.120 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:18.120 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:18.120 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:18.120 [109/268] Linking static target lib/librte_rcu.a 00:01:18.120 [110/268] Linking static target lib/librte_net.a 00:01:18.120 [111/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:18.120 [112/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:18.120 [113/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:18.120 [114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:18.120 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:18.120 [116/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:18.120 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:18.120 [118/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:18.120 [119/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:18.120 [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:18.120 [121/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:18.120 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:18.120 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:18.120 [124/268] Linking static target lib/librte_mempool.a 00:01:18.120 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:18.120 [126/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:18.120 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:18.120 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:18.120 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:18.120 [130/268] Linking static target lib/librte_eal.a 00:01:18.121 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:18.121 [132/268] Linking static target lib/librte_cmdline.a 00:01:18.121 [133/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.121 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:18.121 [135/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.380 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:18.380 [137/268] Linking target lib/librte_log.so.24.1 00:01:18.380 [138/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:18.380 [139/268] 
Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:18.380 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:18.380 [141/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:18.380 [142/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.380 [143/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:18.380 [144/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.380 [145/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.380 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:18.380 [147/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:18.380 [148/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:18.380 [149/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:18.380 [150/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:18.380 [151/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:18.380 [152/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:18.380 [153/268] Linking static target lib/librte_mbuf.a 00:01:18.380 [154/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:18.381 [155/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:18.381 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:18.381 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:18.381 [158/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:18.381 [159/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:18.381 [160/268] Compiling C object 
lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:18.381 [161/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.381 [162/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:18.381 [163/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:18.381 [164/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:18.381 [165/268] Linking target lib/librte_kvargs.so.24.1 00:01:18.381 [166/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:18.381 [167/268] Linking target lib/librte_telemetry.so.24.1 00:01:18.381 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:18.381 [169/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:18.381 [170/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:18.381 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:18.381 [172/268] Linking static target lib/librte_timer.a 00:01:18.381 [173/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:18.381 [174/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:18.381 [175/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:18.381 [176/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:18.640 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:18.640 [178/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:18.640 [179/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:18.640 [180/268] Linking static target lib/librte_power.a 00:01:18.640 [181/268] Linking static target lib/librte_dmadev.a 00:01:18.640 [182/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:18.640 [183/268] Compiling C object 
lib/librte_vhost.a.p/vhost_socket.c.o 00:01:18.640 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:18.640 [185/268] Linking static target lib/librte_compressdev.a 00:01:18.640 [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:18.640 [187/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:18.640 [188/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:18.640 [189/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:18.640 [190/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:18.640 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:18.640 [192/268] Linking static target lib/librte_reorder.a 00:01:18.640 [193/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:18.640 [194/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:18.640 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:18.640 [196/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:18.640 [197/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:18.640 [198/268] Linking static target lib/librte_security.a 00:01:18.640 [199/268] Linking static target drivers/librte_bus_vdev.a 00:01:18.640 [200/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:18.640 [201/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:18.640 [202/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:18.640 [203/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:18.640 [204/268] Linking static target drivers/librte_mempool_ring.a 00:01:18.640 [205/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:18.640 [206/268] Linking static target lib/librte_hash.a 00:01:18.899 [207/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:18.899 [208/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:18.899 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:18.899 [210/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.899 [211/268] Linking static target drivers/librte_bus_pci.a 00:01:18.899 [212/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.899 [213/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:18.899 [214/268] Linking static target lib/librte_cryptodev.a 00:01:18.899 [215/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.899 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.158 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:19.158 [218/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.158 [219/268] Linking static target lib/librte_ethdev.a 00:01:19.158 [220/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.158 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.158 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.158 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.417 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:19.417 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson 
to capture output) 00:01:19.417 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.676 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.244 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:20.503 [229/268] Linking static target lib/librte_vhost.a 00:01:20.503 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.410 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.683 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.683 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.683 [234/268] Linking target lib/librte_eal.so.24.1 00:01:27.683 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:27.683 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:27.683 [237/268] Linking target lib/librte_ring.so.24.1 00:01:27.683 [238/268] Linking target lib/librte_dmadev.so.24.1 00:01:27.683 [239/268] Linking target lib/librte_meter.so.24.1 00:01:27.683 [240/268] Linking target lib/librte_pci.so.24.1 00:01:27.683 [241/268] Linking target lib/librte_timer.so.24.1 00:01:27.683 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:27.683 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:27.941 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:27.941 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:27.941 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:27.941 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:27.941 [248/268] Linking target 
lib/librte_rcu.so.24.1 00:01:27.941 [249/268] Linking target lib/librte_mempool.so.24.1 00:01:27.941 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:27.941 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:27.941 [252/268] Linking target lib/librte_mbuf.so.24.1 00:01:27.941 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:28.205 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:28.205 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:28.205 [256/268] Linking target lib/librte_compressdev.so.24.1 00:01:28.205 [257/268] Linking target lib/librte_reorder.so.24.1 00:01:28.205 [258/268] Linking target lib/librte_net.so.24.1 00:01:28.205 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:28.525 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:28.525 [261/268] Linking target lib/librte_security.so.24.1 00:01:28.525 [262/268] Linking target lib/librte_cmdline.so.24.1 00:01:28.525 [263/268] Linking target lib/librte_hash.so.24.1 00:01:28.525 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:28.525 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:28.525 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:28.525 [267/268] Linking target lib/librte_power.so.24.1 00:01:28.525 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:28.525 INFO: autodetecting backend as ninja 00:01:28.525 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:29.462 CC lib/ut_mock/mock.o 00:01:29.462 CC lib/ut/ut.o 00:01:29.462 CC lib/log/log.o 00:01:29.462 CC lib/log/log_deprecated.o 00:01:29.462 CC lib/log/log_flags.o 00:01:29.721 LIB libspdk_ut.a 00:01:29.721 
LIB libspdk_ut_mock.a 00:01:29.721 LIB libspdk_log.a 00:01:29.721 SO libspdk_ut.so.2.0 00:01:29.721 SO libspdk_ut_mock.so.6.0 00:01:29.721 SO libspdk_log.so.7.0 00:01:29.721 SYMLINK libspdk_ut.so 00:01:29.721 SYMLINK libspdk_ut_mock.so 00:01:29.721 SYMLINK libspdk_log.so 00:01:29.980 CC lib/dma/dma.o 00:01:29.980 CC lib/ioat/ioat.o 00:01:30.239 CXX lib/trace_parser/trace.o 00:01:30.239 CC lib/util/base64.o 00:01:30.239 CC lib/util/bit_array.o 00:01:30.239 CC lib/util/cpuset.o 00:01:30.239 CC lib/util/crc16.o 00:01:30.239 CC lib/util/crc32_ieee.o 00:01:30.239 CC lib/util/crc32.o 00:01:30.239 CC lib/util/crc32c.o 00:01:30.239 CC lib/util/dif.o 00:01:30.239 CC lib/util/crc64.o 00:01:30.239 CC lib/util/fd.o 00:01:30.239 CC lib/util/file.o 00:01:30.239 CC lib/util/hexlify.o 00:01:30.239 CC lib/util/iov.o 00:01:30.239 CC lib/util/math.o 00:01:30.239 CC lib/util/pipe.o 00:01:30.239 CC lib/util/strerror_tls.o 00:01:30.239 CC lib/util/string.o 00:01:30.239 CC lib/util/uuid.o 00:01:30.239 CC lib/util/fd_group.o 00:01:30.239 CC lib/util/xor.o 00:01:30.239 CC lib/util/zipf.o 00:01:30.239 CC lib/vfio_user/host/vfio_user_pci.o 00:01:30.239 CC lib/vfio_user/host/vfio_user.o 00:01:30.239 LIB libspdk_dma.a 00:01:30.239 SO libspdk_dma.so.4.0 00:01:30.239 LIB libspdk_ioat.a 00:01:30.498 SYMLINK libspdk_dma.so 00:01:30.498 SO libspdk_ioat.so.7.0 00:01:30.498 SYMLINK libspdk_ioat.so 00:01:30.498 LIB libspdk_vfio_user.a 00:01:30.498 SO libspdk_vfio_user.so.5.0 00:01:30.498 LIB libspdk_util.a 00:01:30.498 SYMLINK libspdk_vfio_user.so 00:01:30.498 SO libspdk_util.so.9.1 00:01:30.757 SYMLINK libspdk_util.so 00:01:30.757 LIB libspdk_trace_parser.a 00:01:30.757 SO libspdk_trace_parser.so.5.0 00:01:31.016 SYMLINK libspdk_trace_parser.so 00:01:31.016 CC lib/rdma_provider/common.o 00:01:31.016 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:31.016 CC lib/conf/conf.o 00:01:31.016 CC lib/json/json_parse.o 00:01:31.016 CC lib/env_dpdk/env.o 00:01:31.016 CC lib/env_dpdk/memory.o 00:01:31.016 CC 
lib/env_dpdk/init.o 00:01:31.016 CC lib/env_dpdk/pci.o 00:01:31.016 CC lib/json/json_util.o 00:01:31.016 CC lib/json/json_write.o 00:01:31.016 CC lib/env_dpdk/pci_ioat.o 00:01:31.016 CC lib/env_dpdk/threads.o 00:01:31.016 CC lib/env_dpdk/pci_vmd.o 00:01:31.016 CC lib/env_dpdk/pci_virtio.o 00:01:31.016 CC lib/rdma_utils/rdma_utils.o 00:01:31.016 CC lib/env_dpdk/pci_idxd.o 00:01:31.016 CC lib/env_dpdk/pci_event.o 00:01:31.016 CC lib/env_dpdk/sigbus_handler.o 00:01:31.016 CC lib/env_dpdk/pci_dpdk.o 00:01:31.016 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:31.016 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:31.016 CC lib/vmd/vmd.o 00:01:31.016 CC lib/vmd/led.o 00:01:31.016 CC lib/idxd/idxd.o 00:01:31.016 CC lib/idxd/idxd_user.o 00:01:31.016 CC lib/idxd/idxd_kernel.o 00:01:31.275 LIB libspdk_rdma_provider.a 00:01:31.275 LIB libspdk_conf.a 00:01:31.275 SO libspdk_rdma_provider.so.6.0 00:01:31.275 SO libspdk_conf.so.6.0 00:01:31.275 LIB libspdk_rdma_utils.a 00:01:31.275 LIB libspdk_json.a 00:01:31.275 SYMLINK libspdk_rdma_provider.so 00:01:31.275 SO libspdk_rdma_utils.so.1.0 00:01:31.275 SYMLINK libspdk_conf.so 00:01:31.275 SO libspdk_json.so.6.0 00:01:31.275 SYMLINK libspdk_rdma_utils.so 00:01:31.275 SYMLINK libspdk_json.so 00:01:31.534 LIB libspdk_idxd.a 00:01:31.534 LIB libspdk_vmd.a 00:01:31.534 SO libspdk_idxd.so.12.0 00:01:31.534 SO libspdk_vmd.so.6.0 00:01:31.534 SYMLINK libspdk_idxd.so 00:01:31.534 SYMLINK libspdk_vmd.so 00:01:31.534 CC lib/jsonrpc/jsonrpc_server.o 00:01:31.534 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:31.534 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:31.534 CC lib/jsonrpc/jsonrpc_client.o 00:01:31.793 LIB libspdk_jsonrpc.a 00:01:31.793 SO libspdk_jsonrpc.so.6.0 00:01:32.053 SYMLINK libspdk_jsonrpc.so 00:01:32.053 LIB libspdk_env_dpdk.a 00:01:32.053 SO libspdk_env_dpdk.so.14.1 00:01:32.312 SYMLINK libspdk_env_dpdk.so 00:01:32.312 CC lib/rpc/rpc.o 00:01:32.569 LIB libspdk_rpc.a 00:01:32.569 SO libspdk_rpc.so.6.0 00:01:32.569 SYMLINK libspdk_rpc.so 00:01:32.828 CC 
lib/notify/notify_rpc.o 00:01:32.828 CC lib/notify/notify.o 00:01:32.828 CC lib/trace/trace.o 00:01:32.828 CC lib/trace/trace_flags.o 00:01:32.828 CC lib/trace/trace_rpc.o 00:01:32.828 CC lib/keyring/keyring.o 00:01:32.828 CC lib/keyring/keyring_rpc.o 00:01:33.086 LIB libspdk_notify.a 00:01:33.086 SO libspdk_notify.so.6.0 00:01:33.086 LIB libspdk_keyring.a 00:01:33.086 LIB libspdk_trace.a 00:01:33.086 SYMLINK libspdk_notify.so 00:01:33.086 SO libspdk_keyring.so.1.0 00:01:33.086 SO libspdk_trace.so.10.0 00:01:33.086 SYMLINK libspdk_keyring.so 00:01:33.086 SYMLINK libspdk_trace.so 00:01:33.344 CC lib/sock/sock.o 00:01:33.344 CC lib/sock/sock_rpc.o 00:01:33.344 CC lib/thread/thread.o 00:01:33.344 CC lib/thread/iobuf.o 00:01:33.603 LIB libspdk_sock.a 00:01:33.862 SO libspdk_sock.so.10.0 00:01:33.862 SYMLINK libspdk_sock.so 00:01:34.120 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:34.120 CC lib/nvme/nvme_ctrlr.o 00:01:34.120 CC lib/nvme/nvme_fabric.o 00:01:34.120 CC lib/nvme/nvme_ns_cmd.o 00:01:34.120 CC lib/nvme/nvme_ns.o 00:01:34.120 CC lib/nvme/nvme_pcie_common.o 00:01:34.120 CC lib/nvme/nvme_pcie.o 00:01:34.120 CC lib/nvme/nvme_qpair.o 00:01:34.120 CC lib/nvme/nvme.o 00:01:34.120 CC lib/nvme/nvme_quirks.o 00:01:34.120 CC lib/nvme/nvme_transport.o 00:01:34.120 CC lib/nvme/nvme_discovery.o 00:01:34.120 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:34.120 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:34.120 CC lib/nvme/nvme_tcp.o 00:01:34.120 CC lib/nvme/nvme_opal.o 00:01:34.120 CC lib/nvme/nvme_io_msg.o 00:01:34.120 CC lib/nvme/nvme_poll_group.o 00:01:34.120 CC lib/nvme/nvme_auth.o 00:01:34.120 CC lib/nvme/nvme_zns.o 00:01:34.120 CC lib/nvme/nvme_stubs.o 00:01:34.120 CC lib/nvme/nvme_cuse.o 00:01:34.120 CC lib/nvme/nvme_vfio_user.o 00:01:34.120 CC lib/nvme/nvme_rdma.o 00:01:34.378 LIB libspdk_thread.a 00:01:34.378 SO libspdk_thread.so.10.1 00:01:34.636 SYMLINK libspdk_thread.so 00:01:34.915 CC lib/vfu_tgt/tgt_endpoint.o 00:01:34.915 CC lib/vfu_tgt/tgt_rpc.o 00:01:34.915 CC 
lib/init/json_config.o 00:01:34.915 CC lib/init/subsystem.o 00:01:34.915 CC lib/init/subsystem_rpc.o 00:01:34.915 CC lib/init/rpc.o 00:01:34.915 CC lib/blob/request.o 00:01:34.915 CC lib/blob/blobstore.o 00:01:34.915 CC lib/blob/zeroes.o 00:01:34.915 CC lib/virtio/virtio.o 00:01:34.915 CC lib/blob/blob_bs_dev.o 00:01:34.915 CC lib/virtio/virtio_pci.o 00:01:34.915 CC lib/virtio/virtio_vhost_user.o 00:01:34.915 CC lib/virtio/virtio_vfio_user.o 00:01:34.915 CC lib/accel/accel.o 00:01:34.915 CC lib/accel/accel_rpc.o 00:01:34.915 CC lib/accel/accel_sw.o 00:01:35.173 LIB libspdk_init.a 00:01:35.173 LIB libspdk_vfu_tgt.a 00:01:35.173 SO libspdk_init.so.5.0 00:01:35.173 SO libspdk_vfu_tgt.so.3.0 00:01:35.173 LIB libspdk_virtio.a 00:01:35.173 SYMLINK libspdk_init.so 00:01:35.173 SO libspdk_virtio.so.7.0 00:01:35.173 SYMLINK libspdk_vfu_tgt.so 00:01:35.173 SYMLINK libspdk_virtio.so 00:01:35.431 CC lib/event/app.o 00:01:35.431 CC lib/event/reactor.o 00:01:35.431 CC lib/event/log_rpc.o 00:01:35.431 CC lib/event/app_rpc.o 00:01:35.431 CC lib/event/scheduler_static.o 00:01:35.688 LIB libspdk_accel.a 00:01:35.688 SO libspdk_accel.so.15.1 00:01:35.688 LIB libspdk_nvme.a 00:01:35.688 SYMLINK libspdk_accel.so 00:01:35.688 LIB libspdk_event.a 00:01:35.688 SO libspdk_nvme.so.13.1 00:01:35.688 SO libspdk_event.so.14.0 00:01:35.947 SYMLINK libspdk_event.so 00:01:35.947 CC lib/bdev/bdev.o 00:01:35.947 CC lib/bdev/bdev_rpc.o 00:01:35.947 CC lib/bdev/part.o 00:01:35.947 CC lib/bdev/bdev_zone.o 00:01:35.947 CC lib/bdev/scsi_nvme.o 00:01:35.947 SYMLINK libspdk_nvme.so 00:01:36.881 LIB libspdk_blob.a 00:01:36.881 SO libspdk_blob.so.11.0 00:01:36.881 SYMLINK libspdk_blob.so 00:01:37.467 CC lib/blobfs/blobfs.o 00:01:37.467 CC lib/blobfs/tree.o 00:01:37.467 CC lib/lvol/lvol.o 00:01:37.725 LIB libspdk_bdev.a 00:01:37.725 SO libspdk_bdev.so.15.1 00:01:37.725 SYMLINK libspdk_bdev.so 00:01:37.983 LIB libspdk_blobfs.a 00:01:37.983 SO libspdk_blobfs.so.10.0 00:01:37.983 LIB libspdk_lvol.a 00:01:37.983 
SO libspdk_lvol.so.10.0 00:01:37.983 SYMLINK libspdk_blobfs.so 00:01:37.983 SYMLINK libspdk_lvol.so 00:01:37.983 CC lib/ftl/ftl_core.o 00:01:37.983 CC lib/ftl/ftl_init.o 00:01:37.983 CC lib/ftl/ftl_layout.o 00:01:37.983 CC lib/ftl/ftl_debug.o 00:01:37.983 CC lib/ftl/ftl_io.o 00:01:37.983 CC lib/ftl/ftl_sb.o 00:01:37.983 CC lib/ftl/ftl_l2p.o 00:01:37.983 CC lib/ftl/ftl_band.o 00:01:37.983 CC lib/ftl/ftl_l2p_flat.o 00:01:37.983 CC lib/ftl/ftl_nv_cache.o 00:01:37.983 CC lib/ftl/ftl_rq.o 00:01:37.983 CC lib/ftl/ftl_band_ops.o 00:01:37.983 CC lib/ftl/ftl_writer.o 00:01:37.983 CC lib/ftl/ftl_reloc.o 00:01:37.983 CC lib/ftl/ftl_l2p_cache.o 00:01:38.241 CC lib/ftl/ftl_p2l.o 00:01:38.241 CC lib/ftl/mngt/ftl_mngt.o 00:01:38.241 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:38.241 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:38.241 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:38.241 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:38.241 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:38.241 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:38.241 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:38.241 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:38.241 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:38.241 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:38.241 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:38.241 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:38.241 CC lib/nbd/nbd_rpc.o 00:01:38.241 CC lib/nbd/nbd.o 00:01:38.241 CC lib/ftl/utils/ftl_conf.o 00:01:38.241 CC lib/ftl/utils/ftl_md.o 00:01:38.241 CC lib/ftl/utils/ftl_bitmap.o 00:01:38.241 CC lib/ftl/utils/ftl_mempool.o 00:01:38.241 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:38.241 CC lib/ftl/utils/ftl_property.o 00:01:38.241 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:38.241 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:38.241 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:38.241 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:38.241 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:38.241 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:38.241 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:38.241 CC lib/nvmf/ctrlr_discovery.o 
00:01:38.241 CC lib/nvmf/ctrlr.o 00:01:38.241 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:38.241 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:38.241 CC lib/nvmf/ctrlr_bdev.o 00:01:38.241 CC lib/ftl/base/ftl_base_dev.o 00:01:38.241 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:38.241 CC lib/nvmf/subsystem.o 00:01:38.241 CC lib/ftl/base/ftl_base_bdev.o 00:01:38.241 CC lib/nvmf/nvmf.o 00:01:38.241 CC lib/ftl/ftl_trace.o 00:01:38.241 CC lib/nvmf/nvmf_rpc.o 00:01:38.241 CC lib/scsi/lun.o 00:01:38.241 CC lib/scsi/dev.o 00:01:38.241 CC lib/nvmf/transport.o 00:01:38.241 CC lib/scsi/port.o 00:01:38.241 CC lib/nvmf/tcp.o 00:01:38.241 CC lib/ublk/ublk.o 00:01:38.241 CC lib/nvmf/stubs.o 00:01:38.241 CC lib/ublk/ublk_rpc.o 00:01:38.241 CC lib/scsi/scsi_pr.o 00:01:38.241 CC lib/nvmf/vfio_user.o 00:01:38.241 CC lib/scsi/scsi.o 00:01:38.241 CC lib/scsi/scsi_bdev.o 00:01:38.241 CC lib/scsi/scsi_rpc.o 00:01:38.241 CC lib/nvmf/rdma.o 00:01:38.241 CC lib/scsi/task.o 00:01:38.241 CC lib/nvmf/auth.o 00:01:38.241 CC lib/nvmf/mdns_server.o 00:01:38.806 LIB libspdk_nbd.a 00:01:38.806 SO libspdk_nbd.so.7.0 00:01:38.806 LIB libspdk_scsi.a 00:01:38.806 LIB libspdk_ublk.a 00:01:38.806 SYMLINK libspdk_nbd.so 00:01:38.806 SO libspdk_ublk.so.3.0 00:01:38.806 SO libspdk_scsi.so.9.0 00:01:38.806 SYMLINK libspdk_ublk.so 00:01:38.806 SYMLINK libspdk_scsi.so 00:01:39.063 LIB libspdk_ftl.a 00:01:39.063 SO libspdk_ftl.so.9.0 00:01:39.063 CC lib/vhost/vhost_rpc.o 00:01:39.063 CC lib/vhost/vhost_scsi.o 00:01:39.063 CC lib/vhost/vhost.o 00:01:39.063 CC lib/vhost/vhost_blk.o 00:01:39.063 CC lib/vhost/rte_vhost_user.o 00:01:39.063 CC lib/iscsi/init_grp.o 00:01:39.063 CC lib/iscsi/conn.o 00:01:39.063 CC lib/iscsi/iscsi.o 00:01:39.063 CC lib/iscsi/md5.o 00:01:39.063 CC lib/iscsi/param.o 00:01:39.063 CC lib/iscsi/tgt_node.o 00:01:39.063 CC lib/iscsi/iscsi_subsystem.o 00:01:39.063 CC lib/iscsi/portal_grp.o 00:01:39.063 CC lib/iscsi/iscsi_rpc.o 00:01:39.063 CC lib/iscsi/task.o 00:01:39.320 SYMLINK libspdk_ftl.so 00:01:39.886 LIB 
libspdk_nvmf.a 00:01:39.886 SO libspdk_nvmf.so.18.1 00:01:39.886 LIB libspdk_vhost.a 00:01:40.145 SO libspdk_vhost.so.8.0 00:01:40.145 SYMLINK libspdk_vhost.so 00:01:40.145 SYMLINK libspdk_nvmf.so 00:01:40.145 LIB libspdk_iscsi.a 00:01:40.145 SO libspdk_iscsi.so.8.0 00:01:40.402 SYMLINK libspdk_iscsi.so 00:01:40.659 CC module/env_dpdk/env_dpdk_rpc.o 00:01:40.917 CC module/vfu_device/vfu_virtio_blk.o 00:01:40.917 CC module/vfu_device/vfu_virtio.o 00:01:40.917 CC module/vfu_device/vfu_virtio_scsi.o 00:01:40.917 CC module/vfu_device/vfu_virtio_rpc.o 00:01:40.917 LIB libspdk_env_dpdk_rpc.a 00:01:40.917 CC module/blob/bdev/blob_bdev.o 00:01:40.917 CC module/accel/iaa/accel_iaa.o 00:01:40.917 CC module/accel/iaa/accel_iaa_rpc.o 00:01:40.917 CC module/accel/dsa/accel_dsa.o 00:01:40.917 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:40.917 CC module/accel/dsa/accel_dsa_rpc.o 00:01:40.917 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:40.917 CC module/accel/error/accel_error.o 00:01:40.917 CC module/accel/ioat/accel_ioat.o 00:01:40.917 CC module/accel/ioat/accel_ioat_rpc.o 00:01:40.917 CC module/accel/error/accel_error_rpc.o 00:01:40.917 CC module/scheduler/gscheduler/gscheduler.o 00:01:40.917 SO libspdk_env_dpdk_rpc.so.6.0 00:01:40.917 CC module/keyring/linux/keyring.o 00:01:40.917 CC module/keyring/file/keyring_rpc.o 00:01:40.917 CC module/keyring/linux/keyring_rpc.o 00:01:40.917 CC module/keyring/file/keyring.o 00:01:40.917 CC module/sock/posix/posix.o 00:01:40.917 SYMLINK libspdk_env_dpdk_rpc.so 00:01:41.174 LIB libspdk_keyring_file.a 00:01:41.174 LIB libspdk_keyring_linux.a 00:01:41.174 LIB libspdk_scheduler_gscheduler.a 00:01:41.174 LIB libspdk_scheduler_dpdk_governor.a 00:01:41.174 LIB libspdk_accel_error.a 00:01:41.174 LIB libspdk_accel_ioat.a 00:01:41.174 LIB libspdk_scheduler_dynamic.a 00:01:41.174 SO libspdk_keyring_file.so.1.0 00:01:41.174 SO libspdk_accel_error.so.2.0 00:01:41.174 SO libspdk_scheduler_gscheduler.so.4.0 00:01:41.174 SO 
libspdk_keyring_linux.so.1.0 00:01:41.174 LIB libspdk_accel_iaa.a 00:01:41.174 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:41.174 SO libspdk_accel_ioat.so.6.0 00:01:41.174 SO libspdk_scheduler_dynamic.so.4.0 00:01:41.174 SO libspdk_accel_iaa.so.3.0 00:01:41.174 LIB libspdk_accel_dsa.a 00:01:41.174 LIB libspdk_blob_bdev.a 00:01:41.174 SYMLINK libspdk_accel_error.so 00:01:41.174 SYMLINK libspdk_keyring_file.so 00:01:41.174 SYMLINK libspdk_scheduler_gscheduler.so 00:01:41.174 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:41.174 SO libspdk_blob_bdev.so.11.0 00:01:41.174 SYMLINK libspdk_accel_ioat.so 00:01:41.174 SYMLINK libspdk_keyring_linux.so 00:01:41.174 SO libspdk_accel_dsa.so.5.0 00:01:41.174 SYMLINK libspdk_scheduler_dynamic.so 00:01:41.174 SYMLINK libspdk_accel_iaa.so 00:01:41.174 SYMLINK libspdk_blob_bdev.so 00:01:41.174 SYMLINK libspdk_accel_dsa.so 00:01:41.430 LIB libspdk_vfu_device.a 00:01:41.430 SO libspdk_vfu_device.so.3.0 00:01:41.430 SYMLINK libspdk_vfu_device.so 00:01:41.430 LIB libspdk_sock_posix.a 00:01:41.689 SO libspdk_sock_posix.so.6.0 00:01:41.689 SYMLINK libspdk_sock_posix.so 00:01:41.689 CC module/bdev/delay/vbdev_delay.o 00:01:41.689 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:41.689 CC module/bdev/malloc/bdev_malloc.o 00:01:41.689 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:41.689 CC module/bdev/null/bdev_null.o 00:01:41.689 CC module/bdev/error/vbdev_error_rpc.o 00:01:41.689 CC module/bdev/error/vbdev_error.o 00:01:41.689 CC module/bdev/null/bdev_null_rpc.o 00:01:41.689 CC module/bdev/iscsi/bdev_iscsi.o 00:01:41.689 CC module/bdev/raid/bdev_raid.o 00:01:41.689 CC module/bdev/nvme/bdev_nvme.o 00:01:41.689 CC module/blobfs/bdev/blobfs_bdev.o 00:01:41.689 CC module/bdev/nvme/nvme_rpc.o 00:01:41.689 CC module/bdev/nvme/bdev_mdns_client.o 00:01:41.689 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:41.689 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:41.689 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:41.689 CC module/bdev/raid/bdev_raid_sb.o 
00:01:41.689 CC module/bdev/raid/bdev_raid_rpc.o 00:01:41.689 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:41.689 CC module/bdev/passthru/vbdev_passthru.o 00:01:41.689 CC module/bdev/raid/raid0.o 00:01:41.689 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:41.689 CC module/bdev/nvme/vbdev_opal.o 00:01:41.689 CC module/bdev/gpt/gpt.o 00:01:41.689 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:41.689 CC module/bdev/raid/concat.o 00:01:41.689 CC module/bdev/gpt/vbdev_gpt.o 00:01:41.689 CC module/bdev/raid/raid1.o 00:01:41.689 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:41.689 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:41.689 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:41.689 CC module/bdev/split/vbdev_split.o 00:01:41.689 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:41.689 CC module/bdev/lvol/vbdev_lvol.o 00:01:41.689 CC module/bdev/split/vbdev_split_rpc.o 00:01:41.689 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:41.689 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:41.689 CC module/bdev/aio/bdev_aio_rpc.o 00:01:41.689 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:41.689 CC module/bdev/ftl/bdev_ftl.o 00:01:41.689 CC module/bdev/aio/bdev_aio.o 00:01:41.957 LIB libspdk_blobfs_bdev.a 00:01:41.957 LIB libspdk_bdev_null.a 00:01:41.957 SO libspdk_blobfs_bdev.so.6.0 00:01:41.957 LIB libspdk_bdev_split.a 00:01:41.957 LIB libspdk_bdev_error.a 00:01:41.957 SO libspdk_bdev_null.so.6.0 00:01:41.957 SO libspdk_bdev_error.so.6.0 00:01:41.957 LIB libspdk_bdev_gpt.a 00:01:41.957 SO libspdk_bdev_split.so.6.0 00:01:41.957 LIB libspdk_bdev_passthru.a 00:01:41.957 LIB libspdk_bdev_malloc.a 00:01:41.957 SO libspdk_bdev_gpt.so.6.0 00:01:41.957 SYMLINK libspdk_blobfs_bdev.so 00:01:41.957 SYMLINK libspdk_bdev_null.so 00:01:41.957 SO libspdk_bdev_passthru.so.6.0 00:01:41.957 LIB libspdk_bdev_ftl.a 00:01:41.957 SYMLINK libspdk_bdev_error.so 00:01:41.957 SYMLINK libspdk_bdev_split.so 00:01:41.957 SO libspdk_bdev_malloc.so.6.0 00:01:41.957 LIB libspdk_bdev_zone_block.a 
00:01:41.957 LIB libspdk_bdev_delay.a 00:01:41.957 SO libspdk_bdev_ftl.so.6.0 00:01:42.216 LIB libspdk_bdev_aio.a 00:01:42.216 SO libspdk_bdev_zone_block.so.6.0 00:01:42.216 LIB libspdk_bdev_iscsi.a 00:01:42.216 SO libspdk_bdev_delay.so.6.0 00:01:42.216 SYMLINK libspdk_bdev_gpt.so 00:01:42.216 SYMLINK libspdk_bdev_passthru.so 00:01:42.216 SYMLINK libspdk_bdev_malloc.so 00:01:42.216 SO libspdk_bdev_iscsi.so.6.0 00:01:42.216 SO libspdk_bdev_aio.so.6.0 00:01:42.216 SYMLINK libspdk_bdev_ftl.so 00:01:42.216 SYMLINK libspdk_bdev_delay.so 00:01:42.216 SYMLINK libspdk_bdev_zone_block.so 00:01:42.216 LIB libspdk_bdev_lvol.a 00:01:42.216 SYMLINK libspdk_bdev_iscsi.so 00:01:42.216 SYMLINK libspdk_bdev_aio.so 00:01:42.216 LIB libspdk_bdev_virtio.a 00:01:42.216 SO libspdk_bdev_lvol.so.6.0 00:01:42.216 SO libspdk_bdev_virtio.so.6.0 00:01:42.216 SYMLINK libspdk_bdev_lvol.so 00:01:42.216 SYMLINK libspdk_bdev_virtio.so 00:01:42.474 LIB libspdk_bdev_raid.a 00:01:42.474 SO libspdk_bdev_raid.so.6.0 00:01:42.731 SYMLINK libspdk_bdev_raid.so 00:01:43.298 LIB libspdk_bdev_nvme.a 00:01:43.298 SO libspdk_bdev_nvme.so.7.0 00:01:43.557 SYMLINK libspdk_bdev_nvme.so 00:01:44.124 CC module/event/subsystems/vmd/vmd.o 00:01:44.124 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:44.124 CC module/event/subsystems/iobuf/iobuf.o 00:01:44.124 CC module/event/subsystems/keyring/keyring.o 00:01:44.124 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:44.124 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:44.124 CC module/event/subsystems/sock/sock.o 00:01:44.124 CC module/event/subsystems/scheduler/scheduler.o 00:01:44.124 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:44.124 LIB libspdk_event_keyring.a 00:01:44.124 LIB libspdk_event_vmd.a 00:01:44.124 LIB libspdk_event_scheduler.a 00:01:44.124 LIB libspdk_event_iobuf.a 00:01:44.124 LIB libspdk_event_vhost_blk.a 00:01:44.124 LIB libspdk_event_sock.a 00:01:44.124 SO libspdk_event_keyring.so.1.0 00:01:44.124 LIB libspdk_event_vfu_tgt.a 
00:01:44.124 SO libspdk_event_vhost_blk.so.3.0 00:01:44.124 SO libspdk_event_vmd.so.6.0 00:01:44.124 SO libspdk_event_scheduler.so.4.0 00:01:44.124 SO libspdk_event_iobuf.so.3.0 00:01:44.124 SO libspdk_event_sock.so.5.0 00:01:44.124 SO libspdk_event_vfu_tgt.so.3.0 00:01:44.386 SYMLINK libspdk_event_keyring.so 00:01:44.386 SYMLINK libspdk_event_scheduler.so 00:01:44.386 SYMLINK libspdk_event_vhost_blk.so 00:01:44.386 SYMLINK libspdk_event_iobuf.so 00:01:44.386 SYMLINK libspdk_event_sock.so 00:01:44.386 SYMLINK libspdk_event_vmd.so 00:01:44.386 SYMLINK libspdk_event_vfu_tgt.so 00:01:44.646 CC module/event/subsystems/accel/accel.o 00:01:44.646 LIB libspdk_event_accel.a 00:01:44.646 SO libspdk_event_accel.so.6.0 00:01:44.905 SYMLINK libspdk_event_accel.so 00:01:45.164 CC module/event/subsystems/bdev/bdev.o 00:01:45.164 LIB libspdk_event_bdev.a 00:01:45.424 SO libspdk_event_bdev.so.6.0 00:01:45.424 SYMLINK libspdk_event_bdev.so 00:01:45.682 CC module/event/subsystems/scsi/scsi.o 00:01:45.682 CC module/event/subsystems/ublk/ublk.o 00:01:45.683 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:45.683 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:45.683 CC module/event/subsystems/nbd/nbd.o 00:01:45.683 LIB libspdk_event_scsi.a 00:01:45.683 LIB libspdk_event_nbd.a 00:01:45.683 LIB libspdk_event_ublk.a 00:01:45.683 SO libspdk_event_scsi.so.6.0 00:01:45.942 SO libspdk_event_nbd.so.6.0 00:01:45.942 SO libspdk_event_ublk.so.3.0 00:01:45.942 LIB libspdk_event_nvmf.a 00:01:45.942 SYMLINK libspdk_event_scsi.so 00:01:45.942 SO libspdk_event_nvmf.so.6.0 00:01:45.942 SYMLINK libspdk_event_nbd.so 00:01:45.942 SYMLINK libspdk_event_ublk.so 00:01:45.942 SYMLINK libspdk_event_nvmf.so 00:01:46.201 CC module/event/subsystems/iscsi/iscsi.o 00:01:46.201 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:46.201 LIB libspdk_event_iscsi.a 00:01:46.201 LIB libspdk_event_vhost_scsi.a 00:01:46.201 SO libspdk_event_vhost_scsi.so.3.0 00:01:46.201 SO libspdk_event_iscsi.so.6.0 00:01:46.460 
SYMLINK libspdk_event_vhost_scsi.so 00:01:46.460 SYMLINK libspdk_event_iscsi.so 00:01:46.460 SO libspdk.so.6.0 00:01:46.460 SYMLINK libspdk.so 00:01:47.047 CC app/trace_record/trace_record.o 00:01:47.048 CC app/spdk_lspci/spdk_lspci.o 00:01:47.048 CXX app/trace/trace.o 00:01:47.048 CC app/spdk_nvme_identify/identify.o 00:01:47.048 CC app/spdk_nvme_discover/discovery_aer.o 00:01:47.048 CC test/rpc_client/rpc_client_test.o 00:01:47.048 CC app/spdk_top/spdk_top.o 00:01:47.048 CC app/spdk_nvme_perf/perf.o 00:01:47.048 TEST_HEADER include/spdk/accel_module.h 00:01:47.048 TEST_HEADER include/spdk/accel.h 00:01:47.048 TEST_HEADER include/spdk/assert.h 00:01:47.048 TEST_HEADER include/spdk/base64.h 00:01:47.048 TEST_HEADER include/spdk/barrier.h 00:01:47.048 TEST_HEADER include/spdk/bdev.h 00:01:47.048 TEST_HEADER include/spdk/bdev_module.h 00:01:47.048 TEST_HEADER include/spdk/bit_array.h 00:01:47.048 TEST_HEADER include/spdk/bdev_zone.h 00:01:47.048 TEST_HEADER include/spdk/blob_bdev.h 00:01:47.048 TEST_HEADER include/spdk/bit_pool.h 00:01:47.048 TEST_HEADER include/spdk/blobfs.h 00:01:47.048 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:47.048 TEST_HEADER include/spdk/blob.h 00:01:47.048 CC app/nvmf_tgt/nvmf_main.o 00:01:47.048 TEST_HEADER include/spdk/config.h 00:01:47.048 TEST_HEADER include/spdk/conf.h 00:01:47.048 TEST_HEADER include/spdk/cpuset.h 00:01:47.048 TEST_HEADER include/spdk/crc16.h 00:01:47.048 TEST_HEADER include/spdk/crc32.h 00:01:47.048 TEST_HEADER include/spdk/crc64.h 00:01:47.048 TEST_HEADER include/spdk/dif.h 00:01:47.048 TEST_HEADER include/spdk/env_dpdk.h 00:01:47.048 TEST_HEADER include/spdk/dma.h 00:01:47.048 TEST_HEADER include/spdk/event.h 00:01:47.048 TEST_HEADER include/spdk/endian.h 00:01:47.048 TEST_HEADER include/spdk/fd_group.h 00:01:47.048 TEST_HEADER include/spdk/env.h 00:01:47.048 TEST_HEADER include/spdk/file.h 00:01:47.048 TEST_HEADER include/spdk/ftl.h 00:01:47.048 TEST_HEADER include/spdk/fd.h 00:01:47.048 CC 
examples/interrupt_tgt/interrupt_tgt.o 00:01:47.048 TEST_HEADER include/spdk/hexlify.h 00:01:47.048 TEST_HEADER include/spdk/gpt_spec.h 00:01:47.048 TEST_HEADER include/spdk/idxd.h 00:01:47.048 TEST_HEADER include/spdk/histogram_data.h 00:01:47.048 TEST_HEADER include/spdk/idxd_spec.h 00:01:47.048 TEST_HEADER include/spdk/init.h 00:01:47.048 TEST_HEADER include/spdk/ioat.h 00:01:47.048 TEST_HEADER include/spdk/iscsi_spec.h 00:01:47.048 TEST_HEADER include/spdk/json.h 00:01:47.048 TEST_HEADER include/spdk/ioat_spec.h 00:01:47.048 TEST_HEADER include/spdk/keyring.h 00:01:47.048 TEST_HEADER include/spdk/keyring_module.h 00:01:47.048 TEST_HEADER include/spdk/jsonrpc.h 00:01:47.048 TEST_HEADER include/spdk/likely.h 00:01:47.048 TEST_HEADER include/spdk/mmio.h 00:01:47.048 TEST_HEADER include/spdk/log.h 00:01:47.048 TEST_HEADER include/spdk/memory.h 00:01:47.048 TEST_HEADER include/spdk/lvol.h 00:01:47.048 TEST_HEADER include/spdk/notify.h 00:01:47.048 TEST_HEADER include/spdk/nbd.h 00:01:47.048 TEST_HEADER include/spdk/nvme.h 00:01:47.048 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:47.048 TEST_HEADER include/spdk/nvme_intel.h 00:01:47.048 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:47.048 TEST_HEADER include/spdk/nvme_spec.h 00:01:47.048 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:47.048 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:47.048 TEST_HEADER include/spdk/nvme_zns.h 00:01:47.048 CC app/iscsi_tgt/iscsi_tgt.o 00:01:47.048 TEST_HEADER include/spdk/nvmf.h 00:01:47.048 CC app/spdk_tgt/spdk_tgt.o 00:01:47.048 CC app/spdk_dd/spdk_dd.o 00:01:47.048 TEST_HEADER include/spdk/nvmf_transport.h 00:01:47.048 TEST_HEADER include/spdk/nvmf_spec.h 00:01:47.048 TEST_HEADER include/spdk/opal.h 00:01:47.048 TEST_HEADER include/spdk/opal_spec.h 00:01:47.048 TEST_HEADER include/spdk/pipe.h 00:01:47.048 TEST_HEADER include/spdk/pci_ids.h 00:01:47.048 TEST_HEADER include/spdk/queue.h 00:01:47.048 TEST_HEADER include/spdk/reduce.h 00:01:47.048 TEST_HEADER include/spdk/rpc.h 
00:01:47.048 TEST_HEADER include/spdk/scsi.h
00:01:47.048 TEST_HEADER include/spdk/scsi_spec.h
00:01:47.048 TEST_HEADER include/spdk/scheduler.h
00:01:47.048 TEST_HEADER include/spdk/sock.h
00:01:47.048 TEST_HEADER include/spdk/stdinc.h
00:01:47.048 TEST_HEADER include/spdk/string.h
00:01:47.048 TEST_HEADER include/spdk/thread.h
00:01:47.048 TEST_HEADER include/spdk/trace_parser.h
00:01:47.048 TEST_HEADER include/spdk/trace.h
00:01:47.048 TEST_HEADER include/spdk/tree.h
00:01:47.048 TEST_HEADER include/spdk/util.h
00:01:47.048 TEST_HEADER include/spdk/ublk.h
00:01:47.048 TEST_HEADER include/spdk/uuid.h
00:01:47.048 TEST_HEADER include/spdk/version.h
00:01:47.048 TEST_HEADER include/spdk/vfio_user_pci.h
00:01:47.048 TEST_HEADER include/spdk/vfio_user_spec.h
00:01:47.048 TEST_HEADER include/spdk/vhost.h
00:01:47.048 TEST_HEADER include/spdk/xor.h
00:01:47.048 TEST_HEADER include/spdk/zipf.h
00:01:47.048 TEST_HEADER include/spdk/vmd.h
00:01:47.048 CXX test/cpp_headers/accel.o
00:01:47.048 CXX test/cpp_headers/accel_module.o
00:01:47.048 CXX test/cpp_headers/assert.o
00:01:47.048 CXX test/cpp_headers/barrier.o
00:01:47.048 CXX test/cpp_headers/base64.o
00:01:47.048 CXX test/cpp_headers/bdev_module.o
00:01:47.048 CXX test/cpp_headers/bdev.o
00:01:47.048 CXX test/cpp_headers/bdev_zone.o
00:01:47.048 CXX test/cpp_headers/bit_array.o
00:01:47.048 CXX test/cpp_headers/bit_pool.o
00:01:47.048 CXX test/cpp_headers/blob_bdev.o
00:01:47.048 CXX test/cpp_headers/blobfs_bdev.o
00:01:47.048 CXX test/cpp_headers/blobfs.o
00:01:47.048 CXX test/cpp_headers/blob.o
00:01:47.048 CXX test/cpp_headers/conf.o
00:01:47.048 CXX test/cpp_headers/config.o
00:01:47.048 CXX test/cpp_headers/cpuset.o
00:01:47.048 CXX test/cpp_headers/crc16.o
00:01:47.048 CXX test/cpp_headers/crc64.o
00:01:47.048 CXX test/cpp_headers/crc32.o
00:01:47.048 CXX test/cpp_headers/dif.o
00:01:47.048 CXX test/cpp_headers/dma.o
00:01:47.048 CXX test/cpp_headers/env_dpdk.o
00:01:47.048 CXX test/cpp_headers/endian.o
00:01:47.048 CXX test/cpp_headers/env.o
00:01:47.048 CXX test/cpp_headers/event.o
00:01:47.048 CXX test/cpp_headers/fd_group.o
00:01:47.048 CXX test/cpp_headers/fd.o
00:01:47.048 CXX test/cpp_headers/file.o
00:01:47.048 CXX test/cpp_headers/gpt_spec.o
00:01:47.048 CXX test/cpp_headers/ftl.o
00:01:47.048 CXX test/cpp_headers/hexlify.o
00:01:47.048 CXX test/cpp_headers/histogram_data.o
00:01:47.048 CXX test/cpp_headers/idxd.o
00:01:47.048 CXX test/cpp_headers/idxd_spec.o
00:01:47.048 CXX test/cpp_headers/init.o
00:01:47.048 CXX test/cpp_headers/ioat.o
00:01:47.048 CXX test/cpp_headers/ioat_spec.o
00:01:47.048 CXX test/cpp_headers/json.o
00:01:47.048 CXX test/cpp_headers/iscsi_spec.o
00:01:47.048 CXX test/cpp_headers/jsonrpc.o
00:01:47.048 CXX test/cpp_headers/keyring.o
00:01:47.048 CXX test/cpp_headers/keyring_module.o
00:01:47.048 CXX test/cpp_headers/log.o
00:01:47.048 CXX test/cpp_headers/likely.o
00:01:47.048 CXX test/cpp_headers/lvol.o
00:01:47.048 CXX test/cpp_headers/memory.o
00:01:47.048 CXX test/cpp_headers/notify.o
00:01:47.048 CXX test/cpp_headers/mmio.o
00:01:47.048 CXX test/cpp_headers/nbd.o
00:01:47.048 CXX test/cpp_headers/nvme.o
00:01:47.048 CXX test/cpp_headers/nvme_intel.o
00:01:47.048 CXX test/cpp_headers/nvme_ocssd.o
00:01:47.048 CXX test/cpp_headers/nvme_spec.o
00:01:47.048 CXX test/cpp_headers/nvme_zns.o
00:01:47.048 CXX test/cpp_headers/nvme_ocssd_spec.o
00:01:47.048 CXX test/cpp_headers/nvmf_cmd.o
00:01:47.048 CXX test/cpp_headers/nvmf_spec.o
00:01:47.048 CXX test/cpp_headers/nvmf_fc_spec.o
00:01:47.048 CXX test/cpp_headers/nvmf.o
00:01:47.048 CXX test/cpp_headers/nvmf_transport.o
00:01:47.048 CXX test/cpp_headers/opal.o
00:01:47.048 CXX test/cpp_headers/opal_spec.o
00:01:47.048 CXX test/cpp_headers/pci_ids.o
00:01:47.048 CXX test/cpp_headers/pipe.o
00:01:47.048 CXX test/cpp_headers/queue.o
00:01:47.048 CXX test/cpp_headers/reduce.o
00:01:47.048 CC app/fio/nvme/fio_plugin.o
00:01:47.048 CC test/env/vtophys/vtophys.o
00:01:47.048 CC test/thread/poller_perf/poller_perf.o
00:01:47.048 CC test/env/pci/pci_ut.o
00:01:47.048 CC examples/ioat/verify/verify.o
00:01:47.048 CC test/app/histogram_perf/histogram_perf.o
00:01:47.048 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:01:47.048 CC examples/util/zipf/zipf.o
00:01:47.048 CC test/env/memory/memory_ut.o
00:01:47.048 CC test/app/jsoncat/jsoncat.o
00:01:47.048 CXX test/cpp_headers/rpc.o
00:01:47.048 CC examples/ioat/perf/perf.o
00:01:47.048 CC test/app/stub/stub.o
00:01:47.048 CC app/fio/bdev/fio_plugin.o
00:01:47.048 CC test/app/bdev_svc/bdev_svc.o
00:01:47.313 CC test/dma/test_dma/test_dma.o
00:01:47.313 LINK spdk_lspci
00:01:47.314 LINK rpc_client_test
00:01:47.314 LINK spdk_nvme_discover
00:01:47.314 LINK nvmf_tgt
00:01:47.314 LINK spdk_trace_record
00:01:47.581 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:01:47.581 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:01:47.581 LINK iscsi_tgt
00:01:47.581 CC test/env/mem_callbacks/mem_callbacks.o
00:01:47.581 LINK interrupt_tgt
00:01:47.581 LINK histogram_perf
00:01:47.581 LINK poller_perf
00:01:47.581 LINK zipf
00:01:47.581 CXX test/cpp_headers/scheduler.o
00:01:47.581 CXX test/cpp_headers/scsi.o
00:01:47.581 CXX test/cpp_headers/scsi_spec.o
00:01:47.581 CXX test/cpp_headers/sock.o
00:01:47.581 CXX test/cpp_headers/stdinc.o
00:01:47.581 CXX test/cpp_headers/string.o
00:01:47.581 CXX test/cpp_headers/thread.o
00:01:47.581 CXX test/cpp_headers/trace.o
00:01:47.581 CXX test/cpp_headers/trace_parser.o
00:01:47.581 CXX test/cpp_headers/tree.o
00:01:47.581 CXX test/cpp_headers/ublk.o
00:01:47.581 CXX test/cpp_headers/util.o
00:01:47.581 CXX test/cpp_headers/uuid.o
00:01:47.581 CXX test/cpp_headers/version.o
00:01:47.581 CXX test/cpp_headers/vfio_user_pci.o
00:01:47.581 CXX test/cpp_headers/vfio_user_spec.o
00:01:47.581 CXX test/cpp_headers/vhost.o
00:01:47.581 CXX test/cpp_headers/vmd.o
00:01:47.581 CXX test/cpp_headers/xor.o
00:01:47.581 CXX test/cpp_headers/zipf.o
00:01:47.581 LINK spdk_tgt
00:01:47.581 LINK vtophys
00:01:47.581 LINK jsoncat
00:01:47.581 LINK ioat_perf
00:01:47.581 LINK env_dpdk_post_init
00:01:47.581 LINK stub
00:01:47.581 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:01:47.581 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:01:47.840 LINK spdk_dd
00:01:47.840 LINK bdev_svc
00:01:47.840 LINK verify
00:01:47.840 LINK spdk_trace
00:01:47.840 LINK pci_ut
00:01:47.840 LINK test_dma
00:01:47.840 LINK nvme_fuzz
00:01:47.840 LINK spdk_bdev
00:01:48.098 CC test/event/reactor_perf/reactor_perf.o
00:01:48.098 CC test/event/event_perf/event_perf.o
00:01:48.098 CC examples/vmd/lsvmd/lsvmd.o
00:01:48.098 CC test/event/reactor/reactor.o
00:01:48.098 CC test/event/app_repeat/app_repeat.o
00:01:48.098 CC examples/idxd/perf/perf.o
00:01:48.098 CC test/event/scheduler/scheduler.o
00:01:48.098 CC examples/vmd/led/led.o
00:01:48.098 LINK spdk_nvme
00:01:48.098 CC examples/sock/hello_world/hello_sock.o
00:01:48.098 CC examples/thread/thread/thread_ex.o
00:01:48.098 LINK spdk_top
00:01:48.098 LINK spdk_nvme_identify
00:01:48.098 LINK vhost_fuzz
00:01:48.098 LINK spdk_nvme_perf
00:01:48.098 CC app/vhost/vhost.o
00:01:48.098 LINK reactor_perf
00:01:48.098 LINK mem_callbacks
00:01:48.098 LINK reactor
00:01:48.098 LINK event_perf
00:01:48.098 LINK led
00:01:48.098 LINK lsvmd
00:01:48.098 LINK app_repeat
00:01:48.357 LINK scheduler
00:01:48.357 LINK hello_sock
00:01:48.357 LINK thread
00:01:48.357 LINK idxd_perf
00:01:48.357 LINK vhost
00:01:48.357 CC test/nvme/reset/reset.o
00:01:48.357 CC test/nvme/connect_stress/connect_stress.o
00:01:48.357 CC test/nvme/err_injection/err_injection.o
00:01:48.357 CC test/nvme/reserve/reserve.o
00:01:48.357 CC test/nvme/fused_ordering/fused_ordering.o
00:01:48.357 CC test/nvme/overhead/overhead.o
00:01:48.357 CC test/nvme/e2edp/nvme_dp.o
00:01:48.357 CC test/nvme/simple_copy/simple_copy.o
00:01:48.357 CC test/nvme/doorbell_aers/doorbell_aers.o
00:01:48.357 CC test/nvme/sgl/sgl.o
00:01:48.357 CC test/nvme/aer/aer.o
00:01:48.357 CC test/nvme/fdp/fdp.o
00:01:48.357 CC test/nvme/compliance/nvme_compliance.o
00:01:48.357 CC test/nvme/startup/startup.o
00:01:48.357 CC test/nvme/cuse/cuse.o
00:01:48.357 CC test/accel/dif/dif.o
00:01:48.357 CC test/nvme/boot_partition/boot_partition.o
00:01:48.357 CC test/blobfs/mkfs/mkfs.o
00:01:48.357 LINK memory_ut
00:01:48.357 CC test/lvol/esnap/esnap.o
00:01:48.615 LINK connect_stress
00:01:48.615 LINK err_injection
00:01:48.615 LINK startup
00:01:48.615 LINK doorbell_aers
00:01:48.615 LINK boot_partition
00:01:48.615 LINK reserve
00:01:48.615 LINK fused_ordering
00:01:48.615 LINK reset
00:01:48.615 LINK simple_copy
00:01:48.615 LINK sgl
00:01:48.615 LINK nvme_dp
00:01:48.615 LINK overhead
00:01:48.615 LINK aer
00:01:48.615 LINK nvme_compliance
00:01:48.615 LINK mkfs
00:01:48.615 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:01:48.615 LINK fdp
00:01:48.615 CC examples/nvme/nvme_manage/nvme_manage.o
00:01:48.615 CC examples/nvme/reconnect/reconnect.o
00:01:48.615 CC examples/nvme/abort/abort.o
00:01:48.615 CC examples/nvme/hello_world/hello_world.o
00:01:48.615 CC examples/nvme/hotplug/hotplug.o
00:01:48.615 CC examples/nvme/arbitration/arbitration.o
00:01:48.615 CC examples/nvme/cmb_copy/cmb_copy.o
00:01:48.615 CC examples/accel/perf/accel_perf.o
00:01:48.874 LINK dif
00:01:48.874 CC examples/blob/cli/blobcli.o
00:01:48.874 CC examples/blob/hello_world/hello_blob.o
00:01:48.874 LINK pmr_persistence
00:01:48.874 LINK iscsi_fuzz
00:01:48.874 LINK cmb_copy
00:01:48.874 LINK hotplug
00:01:48.874 LINK hello_world
00:01:48.874 LINK reconnect
00:01:48.874 LINK arbitration
00:01:48.874 LINK abort
00:01:49.132 LINK hello_blob
00:01:49.132 LINK nvme_manage
00:01:49.132 LINK accel_perf
00:01:49.132 LINK blobcli
00:01:49.132 CC test/bdev/bdevio/bdevio.o
00:01:49.391 LINK cuse
00:01:49.650 CC examples/bdev/hello_world/hello_bdev.o
00:01:49.650 CC examples/bdev/bdevperf/bdevperf.o
00:01:49.650 LINK bdevio
00:01:49.650 LINK hello_bdev
00:01:50.218 LINK bdevperf
00:01:50.786 CC examples/nvmf/nvmf/nvmf.o
00:01:50.786 LINK nvmf
00:01:52.166 LINK esnap
00:01:52.166
00:01:52.166 real	0m43.005s
00:01:52.166 user	6m30.246s
00:01:52.166 sys	3m19.980s
00:01:52.166 18:27:08 make -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:01:52.166 18:27:08 make -- common/autotest_common.sh@10 -- $ set +x
00:01:52.166 ************************************
00:01:52.166 END TEST make
00:01:52.166 ************************************
00:01:52.425 18:27:08 -- common/autotest_common.sh@1142 -- $ return 0
00:01:52.425 18:27:08 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:01:52.425 18:27:08 -- pm/common@29 -- $ signal_monitor_resources TERM
00:01:52.425 18:27:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:01:52.425 18:27:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:52.425 18:27:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:01:52.425 18:27:08 -- pm/common@44 -- $ pid=801816
00:01:52.425 18:27:08 -- pm/common@50 -- $ kill -TERM 801816
00:01:52.425 18:27:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:52.425 18:27:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:01:52.425 18:27:08 -- pm/common@44 -- $ pid=801817
00:01:52.425 18:27:08 -- pm/common@50 -- $ kill -TERM 801817
00:01:52.425 18:27:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:52.425 18:27:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:01:52.425 18:27:08 -- pm/common@44 -- $ pid=801819
00:01:52.425 18:27:08 -- pm/common@50 -- $ kill -TERM 801819
00:01:52.425 18:27:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:52.425 18:27:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:01:52.425 18:27:08 -- pm/common@44 -- $ pid=801843
00:01:52.425 18:27:08 -- pm/common@50 -- $ sudo -E kill -TERM 801843
00:01:52.425 18:27:08 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:01:52.425 18:27:08 -- nvmf/common.sh@7 -- # uname -s
00:01:52.425 18:27:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:01:52.425 18:27:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:01:52.425 18:27:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:01:52.425 18:27:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:01:52.425 18:27:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:01:52.425 18:27:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:01:52.425 18:27:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:01:52.425 18:27:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:01:52.425 18:27:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:01:52.425 18:27:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:01:52.425 18:27:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:01:52.425 18:27:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:01:52.425 18:27:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:01:52.425 18:27:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:01:52.425 18:27:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:01:52.425 18:27:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:01:52.425 18:27:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:52.425 18:27:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:01:52.425 18:27:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:52.425 18:27:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:52.425 18:27:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:52.425 18:27:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:52.425 18:27:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:52.425 18:27:09 -- paths/export.sh@5 -- # export PATH
00:01:52.425 18:27:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:52.425 18:27:09 -- nvmf/common.sh@47 -- # : 0
00:01:52.425 18:27:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:01:52.425 18:27:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:01:52.425 18:27:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:01:52.425 18:27:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:01:52.425 18:27:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:01:52.425 18:27:09 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:01:52.425 18:27:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:01:52.425 18:27:09 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:01:52.425 18:27:09 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:01:52.426 18:27:09 -- spdk/autotest.sh@32 -- # uname -s
00:01:52.426 18:27:09 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:01:52.426 18:27:09 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:01:52.426 18:27:09 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:01:52.426 18:27:09 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:01:52.426 18:27:09 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:01:52.426 18:27:09 -- spdk/autotest.sh@44 -- # modprobe nbd
00:01:52.426 18:27:09 -- spdk/autotest.sh@46 -- # type -P udevadm
00:01:52.426 18:27:09 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:01:52.426 18:27:09 -- spdk/autotest.sh@48 -- # udevadm_pid=860858
00:01:52.426 18:27:09 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:01:52.426 18:27:09 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:01:52.426 18:27:09 -- pm/common@17 -- # local monitor
00:01:52.426 18:27:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:01:52.426 18:27:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:01:52.426 18:27:09 -- pm/common@21 -- # date +%s
00:01:52.426 18:27:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:01:52.426 18:27:09 -- pm/common@21 -- # date +%s
00:01:52.426 18:27:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:01:52.426 18:27:09 -- pm/common@25 -- # sleep 1
00:01:52.426 18:27:09 -- pm/common@21 -- # date +%s
00:01:52.426 18:27:09 -- pm/common@21 -- # date +%s
00:01:52.426 18:27:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721060829
00:01:52.426 18:27:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721060829
00:01:52.426 18:27:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721060829
00:01:52.426 18:27:09 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721060829
00:01:52.426 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721060829_collect-vmstat.pm.log
00:01:52.426 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721060829_collect-cpu-load.pm.log
00:01:52.426 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721060829_collect-cpu-temp.pm.log
00:01:52.426 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721060829_collect-bmc-pm.bmc.pm.log
00:01:53.364 18:27:10 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:01:53.364 18:27:10 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:01:53.364 18:27:10 -- common/autotest_common.sh@722 -- # xtrace_disable
00:01:53.364 18:27:10 -- common/autotest_common.sh@10 -- # set +x
00:01:53.364 18:27:10 -- spdk/autotest.sh@59 -- # create_test_list
00:01:53.364 18:27:10 -- common/autotest_common.sh@746 -- # xtrace_disable
00:01:53.364 18:27:10 -- common/autotest_common.sh@10 -- # set +x
00:01:53.624 18:27:10 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:01:53.624 18:27:10 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:53.624 18:27:10 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:53.624 18:27:10 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:53.624 18:27:10 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:53.624 18:27:10 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:01:53.624 18:27:10 -- common/autotest_common.sh@1455 -- # uname
00:01:53.624 18:27:10 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']'
00:01:53.624 18:27:10 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:01:53.624 18:27:10 -- common/autotest_common.sh@1475 -- # uname
00:01:53.624 18:27:10 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]]
00:01:53.624 18:27:10 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk
00:01:53.624 18:27:10 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc
00:01:53.624 18:27:10 -- spdk/autotest.sh@72 -- # hash lcov
00:01:53.624 18:27:10 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:01:53.624 18:27:10 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS=
00:01:53.624 --rc lcov_branch_coverage=1
00:01:53.624 --rc lcov_function_coverage=1
00:01:53.624 --rc genhtml_branch_coverage=1
00:01:53.624 --rc genhtml_function_coverage=1
00:01:53.624 --rc genhtml_legend=1
00:01:53.624 --rc geninfo_all_blocks=1
00:01:53.624 '
00:01:53.624 18:27:10 -- spdk/autotest.sh@80 -- # LCOV_OPTS='
00:01:53.624 --rc lcov_branch_coverage=1
00:01:53.624 --rc lcov_function_coverage=1
00:01:53.624 --rc genhtml_branch_coverage=1
00:01:53.624 --rc genhtml_function_coverage=1
00:01:53.624 --rc genhtml_legend=1
00:01:53.624 --rc geninfo_all_blocks=1
00:01:53.624 '
00:01:53.624 18:27:10 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov
00:01:53.624 --rc lcov_branch_coverage=1
00:01:53.624 --rc lcov_function_coverage=1
00:01:53.624 --rc genhtml_branch_coverage=1
00:01:53.624 --rc genhtml_function_coverage=1
00:01:53.624 --rc genhtml_legend=1
00:01:53.624 --rc geninfo_all_blocks=1
00:01:53.624 --no-external'
00:01:53.624 18:27:10 -- spdk/autotest.sh@81 -- # LCOV='lcov
00:01:53.624 --rc lcov_branch_coverage=1
00:01:53.624 --rc lcov_function_coverage=1
00:01:53.624 --rc genhtml_branch_coverage=1
00:01:53.624 --rc genhtml_function_coverage=1
00:01:53.624 --rc genhtml_legend=1
00:01:53.624 --rc geninfo_all_blocks=1
00:01:53.624 --no-external'
00:01:53.624 18:27:10 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v
00:01:53.624 lcov: LCOV version 1.14
00:01:53.624 18:27:10 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:01:55.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found
00:01:55.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno
00:01:55.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found
00:01:55.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno
00:01:55.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found
00:01:55.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno
00:01:55.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found
00:01:55.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno
00:01:55.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found
00:01:55.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno
00:01:55.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found
00:01:55.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno
00:01:55.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found
00:01:55.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno
00:01:55.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno
00:01:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found
00:01:55.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno
00:01:55.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found
00:01:55.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno
00:01:55.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found
00:01:55.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno
00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found
00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno
00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found
00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno
00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found
00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno
00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found
00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno
00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found
00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno
00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found
00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno
00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found
00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno
00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found
00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno
00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found
00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno
00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found
00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno
00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found
00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno
00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found
00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno
00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found
00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:01:55.264 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:01:55.264 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:01:55.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:01:55.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:01:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:01:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:01:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:01:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:01:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:01:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:01:55.523 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:01:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:01:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:01:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:01:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:01:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:01:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:01:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:01:55.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:01:55.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:01:55.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:01:55.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:01:55.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:01:55.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:01:55.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:01:55.524 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:05.503 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:05.503 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:17.791 18:27:34 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:17.791 18:27:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:17.791 18:27:34 -- common/autotest_common.sh@10 -- # set +x 00:02:17.791 18:27:34 -- spdk/autotest.sh@91 -- # rm -f 00:02:17.791 18:27:34 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:20.328 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:20.328 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:20.328 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:20.587 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:20.587 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:20.587 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:20.587 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:20.587 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:20.587 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:20.587 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:20.587 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:20.587 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:20.587 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:20.587 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:20.587 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:20.846 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:20.846 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:20.846 18:27:37 -- spdk/autotest.sh@96 -- # 
get_zoned_devs 00:02:20.846 18:27:37 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:20.846 18:27:37 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:20.846 18:27:37 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:20.846 18:27:37 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:20.846 18:27:37 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:20.846 18:27:37 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:20.846 18:27:37 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:20.846 18:27:37 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:20.846 18:27:37 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:20.846 18:27:37 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:20.846 18:27:37 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:20.846 18:27:37 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:20.846 18:27:37 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:20.846 18:27:37 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:20.846 No valid GPT data, bailing 00:02:20.846 18:27:37 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:20.846 18:27:37 -- scripts/common.sh@391 -- # pt= 00:02:20.846 18:27:37 -- scripts/common.sh@392 -- # return 1 00:02:20.847 18:27:37 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:20.847 1+0 records in 00:02:20.847 1+0 records out 00:02:20.847 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00161189 s, 651 MB/s 00:02:20.847 18:27:37 -- spdk/autotest.sh@118 -- # sync 00:02:20.847 18:27:37 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:20.847 18:27:37 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:20.847 18:27:37 -- common/autotest_common.sh@22 -- # reap_spdk_processes 
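The pre_cleanup trace above enumerates /sys/block/nvme* devices, skips any that report as zoned, probes the namespace for a GPT with spdk-gpt.py, and finally wipes the first 1 MiB with dd. The zoned check can be sketched as below; this is a hypothetical simplification of the helpers traced in autotest_common.sh, and the SYSFS_ROOT override is an assumption added here so the sketch can be exercised against a fake sysfs tree rather than the real /sys.

```shell
#!/usr/bin/env bash
# Sketch of the zoned-device filter seen in the pre_cleanup trace.
# SYSFS_ROOT is a hypothetical override (defaults to the real /sys).
SYSFS_ROOT="${SYSFS_ROOT:-/sys}"

is_block_zoned() {
    # A block device is zoned when its queue/zoned attribute exists
    # and reads something other than "none" (e.g. "host-managed").
    local device=$1
    local zoned_file="$SYSFS_ROOT/block/$device/queue/zoned"
    if [[ ! -e $zoned_file ]]; then
        return 1
    fi
    [[ $(<"$zoned_file") != none ]]
}

get_zoned_devs() {
    # Print the names of all zoned nvme block devices, one per line.
    local path
    for path in "$SYSFS_ROOT"/block/nvme*; do
        [[ -e $path ]] || continue
        if is_block_zoned "${path##*/}"; then
            echo "${path##*/}"
        fi
    done
}
```

In the run above no device was zoned, so `(( 0 > 0 ))` short-circuited the skip logic and the wipe proceeded on nvme0n1.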
00:02:26.126 18:27:42 -- spdk/autotest.sh@124 -- # uname -s 00:02:26.126 18:27:42 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:26.126 18:27:42 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:26.126 18:27:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:26.126 18:27:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:26.126 18:27:42 -- common/autotest_common.sh@10 -- # set +x 00:02:26.126 ************************************ 00:02:26.126 START TEST setup.sh 00:02:26.127 ************************************ 00:02:26.127 18:27:42 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:26.127 * Looking for test storage... 00:02:26.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:26.127 18:27:42 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:26.127 18:27:42 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:26.127 18:27:42 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:26.127 18:27:42 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:26.127 18:27:42 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:26.127 18:27:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:26.127 ************************************ 00:02:26.127 START TEST acl 00:02:26.127 ************************************ 00:02:26.127 18:27:42 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:26.386 * Looking for test storage... 
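The START TEST / END TEST banners that frame each subtest come from the run_test wrapper invoked above (`run_test setup.sh …`, `run_test acl …`). A minimal sketch of that wrapper pattern follows; it is a hypothetical simplification, since the real helper in autotest_common.sh also validates its argument count and records per-test timing.

```shell
# Hypothetical simplification of the run_test wrapper whose banners
# ("START TEST <name>" / "END TEST <name>") appear throughout the log.
run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    # Run the test command with the remaining arguments and keep its
    # exit status so the banner printing does not mask a failure.
    "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
```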
00:02:26.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:26.386 18:27:42 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:26.386 18:27:42 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:26.386 18:27:42 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:26.386 18:27:42 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:26.386 18:27:42 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:26.386 18:27:42 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:26.386 18:27:42 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:26.386 18:27:42 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:26.386 18:27:42 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:26.386 18:27:42 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:26.386 18:27:42 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:26.386 18:27:42 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:26.386 18:27:42 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:26.386 18:27:42 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:26.386 18:27:42 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:26.386 18:27:42 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:29.701 18:27:45 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:29.701 18:27:45 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:29.701 18:27:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.701 18:27:45 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:29.701 18:27:45 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:29.701 18:27:45 setup.sh.acl -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:31.609 Hugepages 00:02:31.609 node hugesize free / total 00:02:31.609 18:27:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:31.609 18:27:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:31.609 18:27:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.609 18:27:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:31.609 18:27:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:31.609 18:27:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.609 18:27:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:31.609 18:27:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.610 00:02:31.610 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:31.610 18:27:48 setup.sh.acl -- 
setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 
0000:80:04.5 == *:*:*.* ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:31.610 18:27:48 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:31.610 18:27:48 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:31.610 18:27:48 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:31.610 18:27:48 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:31.610 ************************************ 00:02:31.610 START TEST denied 00:02:31.610 ************************************ 00:02:31.610 18:27:48 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:31.610 18:27:48 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:02:31.610 18:27:48 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:31.610 18:27:48 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:02:31.610 18:27:48 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:31.610 18:27:48 setup.sh.acl.denied -- setup/common.sh@10 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:34.900 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:02:34.900 18:27:51 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:02:34.900 18:27:51 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:34.900 18:27:51 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:34.900 18:27:51 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:02:34.900 18:27:51 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:02:34.900 18:27:51 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:34.900 18:27:51 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:34.900 18:27:51 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:34.900 18:27:51 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:34.900 18:27:51 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:38.188 00:02:38.188 real 0m6.294s 00:02:38.188 user 0m1.998s 00:02:38.188 sys 0m3.581s 00:02:38.188 18:27:54 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:38.188 18:27:54 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:38.188 ************************************ 00:02:38.188 END TEST denied 00:02:38.188 ************************************ 00:02:38.188 18:27:54 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:38.188 18:27:54 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:38.188 18:27:54 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:38.188 18:27:54 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:38.188 18:27:54 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:38.188 
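The denied test above sets PCI_BLOCKED=' 0000:5e:00.0' and greps for "Skipping denied controller", while the allowed test that follows sets PCI_ALLOWED instead. The filtering decision can be sketched as below; pci_can_use is a hypothetical helper name (the real logic lives in scripts/setup.sh), written to match the observed behavior that an empty allow list permits everything not explicitly blocked, and that the lists are whitespace-separated BDF strings (note the leading space in the log's PCI_BLOCKED value, which word splitting tolerates).

```shell
# Sketch of PCI_BLOCKED / PCI_ALLOWED filtering exercised by the
# denied and allowed tests (hypothetical helper name).
pci_can_use() {
    local bdf=$1 entry
    # Explicitly blocked devices are always skipped.
    for entry in $PCI_BLOCKED; do
        if [[ $bdf == "$entry" ]]; then return 1; fi
    done
    # An empty allow list means "allow everything not blocked".
    if [[ -z $PCI_ALLOWED ]]; then return 0; fi
    for entry in $PCI_ALLOWED; do
        if [[ $bdf == "$entry" ]]; then return 0; fi
    done
    return 1
}
```

With PCI_BLOCKED=' 0000:5e:00.0', the nvme controller at 0000:5e:00.0 is skipped while the ioatdma devices remain usable, matching the "Skipping denied controller at 0000:5e:00.0" line below.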
************************************ 00:02:38.188 START TEST allowed 00:02:38.188 ************************************ 00:02:38.188 18:27:54 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:02:38.188 18:27:54 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:02:38.188 18:27:54 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:38.188 18:27:54 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:02:38.188 18:27:54 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:38.188 18:27:54 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:41.480 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:02:41.480 18:27:58 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:41.480 18:27:58 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:41.480 18:27:58 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:41.480 18:27:58 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:41.480 18:27:58 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:44.775 00:02:44.775 real 0m6.214s 00:02:44.775 user 0m1.782s 00:02:44.775 sys 0m3.515s 00:02:44.775 18:28:00 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:44.775 18:28:00 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:44.775 ************************************ 00:02:44.775 END TEST allowed 00:02:44.775 ************************************ 00:02:44.775 18:28:00 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:44.775 00:02:44.775 real 0m18.058s 00:02:44.775 user 0m5.862s 00:02:44.775 sys 0m10.682s 00:02:44.775 18:28:00 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:44.775 18:28:00 setup.sh.acl -- common/autotest_common.sh@10 -- 
# set +x 00:02:44.775 ************************************ 00:02:44.775 END TEST acl 00:02:44.775 ************************************ 00:02:44.775 18:28:00 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:02:44.775 18:28:00 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:44.775 18:28:00 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:44.775 18:28:00 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:44.775 18:28:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:44.775 ************************************ 00:02:44.775 START TEST hugepages 00:02:44.775 ************************************ 00:02:44.775 18:28:00 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:44.775 * Looking for test storage... 00:02:44.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:44.775 18:28:00 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:44.775 18:28:00 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:44.775 18:28:00 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:44.775 18:28:00 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:44.775 18:28:00 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:44.775 18:28:00 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:44.775 18:28:00 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:44.775 18:28:00 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:44.775 18:28:00 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:44.775 18:28:00 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:44.775 18:28:00 setup.sh.hugepages -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.775 18:28:00 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.775 18:28:00 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.775 18:28:00 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.775 18:28:00 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.775 18:28:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.775 18:28:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.775 18:28:00 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173348108 kB' 'MemAvailable: 176219588 kB' 'Buffers: 3896 kB' 'Cached: 10141260 kB' 'SwapCached: 0 kB' 'Active: 7154456 kB' 'Inactive: 3507524 kB' 'Active(anon): 6762448 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520144 kB' 'Mapped: 194684 kB' 'Shmem: 6245624 kB' 'KReclaimable: 232796 kB' 'Slab: 816232 kB' 'SReclaimable: 232796 kB' 'SUnreclaim: 583436 kB' 'KernelStack: 20736 kB' 'PageTables: 9436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982028 kB' 'Committed_AS: 8298408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315532 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2984916 kB' 'DirectMap2M: 15568896 kB' 'DirectMap1G: 183500800 kB' 00:02:44.775 18:28:00 setup.sh.hugepages -- 
[... xtrace of the get_meminfo key-matching loop trimmed: every meminfo key in the dump above (MemTotal, MemFree, MemAvailable, ... HugePages_Surp) is compared against Hugepagesize and hits "continue" until the matching key is reached ...]
-- setup/common.sh@31 -- # read -r var val _ 00:02:44.776 18:28:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.776 18:28:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.776 18:28:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.776 18:28:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.776 18:28:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.776 18:28:01 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:44.776 18:28:01 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:44.776 18:28:01 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:44.776 18:28:01 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:44.776 18:28:01 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:44.776 18:28:01 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:44.776 18:28:01 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:44.776 18:28:01 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:44.776 18:28:01 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:44.776 18:28:01 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:44.776 18:28:01 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:44.776 18:28:01 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:44.776 18:28:01 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:44.776 18:28:01 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:44.776 18:28:01 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:44.776 
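The loop traced above is setup/common.sh's get_meminfo: split each meminfo line on ": ", skip keys until the requested one matches, then echo its value (here Hugepagesize -> 2048). A compact sketch of the same parse, run against sample text rather than the live /proc/meminfo so it is deterministic (helper name is hypothetical):

```shell
# Extract one field from meminfo-formatted text, as the traced
# get_meminfo helper does with an IFS=': ' read loop.
get_meminfo_field() {
    local want=$1 file=$2 var val _
    while IFS=': ' read -r var val _; do
        [ "$var" = "$want" ] && { echo "$val"; return 0; }
    done < "$file"
    return 1
}

sample=$(mktemp)
printf '%s\n' 'MemTotal: 191381152 kB' 'Hugepagesize: 2048 kB' > "$sample"
hp=$(get_meminfo_field Hugepagesize "$sample")   # hp=2048
rm -f "$sample"
```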
18:28:01 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:44.776 18:28:01 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:44.776 18:28:01 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:44.776 18:28:01 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:44.776 18:28:01 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:44.777 18:28:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:44.777 18:28:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:44.777 18:28:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:44.777 18:28:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:44.777 18:28:01 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:44.777 18:28:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:44.777 18:28:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:44.777 18:28:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:44.777 18:28:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:44.777 18:28:01 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:44.777 18:28:01 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:44.777 18:28:01 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:44.777 18:28:01 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:44.777 18:28:01 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:44.777 18:28:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:44.777 
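The clear_hp step traced above echoes 0 into every per-node nr_hugepages file under /sys/devices/system/node. A sketch of that loop against a throwaway directory standing in for the sysfs tree, so the example never touches real kernel state (the mock layout and function name are assumptions for illustration):

```shell
# Zero out every per-node hugepage pool under the given root,
# mirroring the clear_hp loop in the trace above.
clear_hp() {
    local sysfs=$1 hp
    for hp in "$sysfs"/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"
    done
}

mock=$(mktemp -d)
mkdir -p "$mock/node0/hugepages/hugepages-2048kB" \
         "$mock/node1/hugepages/hugepages-2048kB"
echo 1024 > "$mock/node0/hugepages/hugepages-2048kB/nr_hugepages"
echo 1024 > "$mock/node1/hugepages/hugepages-2048kB/nr_hugepages"
clear_hp "$mock"
```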
************************************ 00:02:44.777 START TEST default_setup 00:02:44.777 ************************************ 00:02:44.777 18:28:01 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:02:44.777 18:28:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:44.777 18:28:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:44.777 18:28:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:44.777 18:28:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:44.777 18:28:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:44.777 18:28:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:44.777 18:28:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:44.777 18:28:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:44.777 18:28:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:44.777 18:28:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:44.777 18:28:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:44.777 18:28:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:44.777 18:28:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:44.777 18:28:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:44.777 18:28:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:44.777 18:28:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:44.777 18:28:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes 
in "${user_nodes[@]}" 00:02:44.777 18:28:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:44.777 18:28:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:44.777 18:28:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:02:44.777 18:28:01 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:44.777 18:28:01 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:47.375 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:47.375 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:47.375 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:47.375 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:47.375 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:47.375 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:47.376 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:47.376 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:47.376 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:47.376 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:47.376 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:47.376 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:47.376 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:47.376 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:47.376 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:47.376 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:47.945 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:02:48.210 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:48.210 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:48.210 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:48.210 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:48.210 
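The get_test_nr_hugepages call traced above turns a requested size of 2097152 kB into nr_hugepages=1024: the size divided by the 2048 kB page size found earlier. A one-function sketch of that arithmetic (helper name is hypothetical):

```shell
# Requested allocation in kB divided by the hugepage size in kB
# gives the page count, e.g. 2097152 / 2048 -> 1024.
nr_hugepages_for() {
    local size_kb=$1 hugepagesize_kb=$2
    echo $(( size_kb / hugepagesize_kb ))
}

nr_hugepages_for 2097152 2048   # prints 1024
```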
18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:02:48.210 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:48.210 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:48.210 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:48.210 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:48.210 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:48.210 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:48.210 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:48.210 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:48.210 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.210 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.210 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.210 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.210 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.210 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.210 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.210 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175505980 kB' 'MemAvailable: 178377460 kB' 'Buffers: 3896 kB' 'Cached: 10141364 kB' 'SwapCached: 0 kB' 'Active: 7167500 kB' 'Inactive: 3507524 kB' 'Active(anon): 6775492 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 
'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533072 kB' 'Mapped: 194576 kB' 'Shmem: 6245728 kB' 'KReclaimable: 232796 kB' 'Slab: 814388 kB' 'SReclaimable: 232796 kB' 'SUnreclaim: 581592 kB' 'KernelStack: 20528 kB' 'PageTables: 8884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8317412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315404 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2984916 kB' 'DirectMap2M: 15568896 kB' 'DirectMap1G: 183500800 kB' 00:02:48.210 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.210 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.210 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.210 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.210 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.210 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.210 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.210 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.210 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.210 18:28:04 
[... xtrace of the same get_meminfo key-matching loop trimmed, this time scanning for AnonHugePages; the log is truncated partway through this loop ...]
-- # read -r var val _ 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.211 18:28:04 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.211 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.212 18:28:04 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo 
HugePages_Surp 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175510360 kB' 'MemAvailable: 178381840 kB' 'Buffers: 3896 kB' 'Cached: 10141384 kB' 'SwapCached: 0 kB' 'Active: 7167784 kB' 'Inactive: 3507524 kB' 'Active(anon): 6775776 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533852 kB' 'Mapped: 194568 kB' 'Shmem: 6245748 kB' 'KReclaimable: 232796 kB' 'Slab: 814000 kB' 'SReclaimable: 232796 kB' 'SUnreclaim: 581204 kB' 'KernelStack: 20544 kB' 'PageTables: 8896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 
8317568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315356 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2984916 kB' 'DirectMap2M: 15568896 kB' 'DirectMap1G: 183500800 kB' 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.212 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.213 18:28:04 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.213 18:28:04 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.213 18:28:04 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.214 
18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.214 18:28:04 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:48.214 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... same [[ ]]/continue/IFS/read trace repeated for each remaining /proc/meminfo key (Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd); none match, so the scan continues until the target key is reached:
00:02:48.215 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:48.215 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:48.215 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:48.215
18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:02:48.215 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:48.215 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:48.215 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:48.215 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:48.215 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:48.215 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.215 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.215 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.215 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.215 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.215 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.215 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.215 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175510576 kB' 'MemAvailable: 178382056 kB' 'Buffers: 3896 kB' 'Cached: 10141392 kB' 'SwapCached: 0 kB' 'Active: 7168456 kB' 'Inactive: 3507524 kB' 'Active(anon): 6776448 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534020 kB' 'Mapped: 194568 kB' 'Shmem: 6245756 kB' 'KReclaimable: 232796 kB' 'Slab: 814000 kB' 'SReclaimable: 232796 kB' 'SUnreclaim: 581204 
kB' 'KernelStack: 20576 kB' 'PageTables: 9032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8317956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315372 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2984916 kB' 'DirectMap2M: 15568896 kB' 'DirectMap1G: 183500800 kB'
00:02:48.215-00:02:48.217 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] ... same [[ ]]/continue/IFS/read trace repeated for every /proc/meminfo key before HugePages_Rsvd (MemTotal through HugePages_Free); none match, so the scan continues until the target key is reached:
00:02:48.217 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:48.217 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:48.217 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:48.217 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:02:48.217 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:48.217 nr_hugepages=1024
00:02:48.217 18:28:04
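The trace above is setup/common.sh's get_meminfo rescanning /proc/meminfo line by line (`IFS=': '` plus `read -r var val _`) until the requested key matches, then echoing its value. A minimal standalone sketch of that lookup pattern follows; the function name and structure mirror the trace, but this is an illustrative reimplementation, not the SPDK script itself, and it omits the per-node `/sys/devices/system/node/nodeN/meminfo` handling the real script supports:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the trace: split each
# "Key:   value kB" line of /proc/meminfo on ': ', and print the value
# for the requested key. Hypothetical helper for illustration only.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

get_meminfo HugePages_Total
```

Scanning the whole file per key is O(keys × lines), which is why the trace repeats the same `[[ ... ]] / continue` group dozens of times for each single lookup.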
setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:48.217 resv_hugepages=0 00:02:48.217 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:48.217 surplus_hugepages=0 00:02:48.217 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:48.217 anon_hugepages=0 00:02:48.217 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:48.217 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:48.217 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:48.217 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:48.217 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:48.217 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:48.217 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:48.217 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.217 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.217 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.217 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.217 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.217 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.217 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 
'MemFree: 175510076 kB' 'MemAvailable: 178381556 kB' 'Buffers: 3896 kB' 'Cached: 10141416 kB' 'SwapCached: 0 kB' 'Active: 7168476 kB' 'Inactive: 3507524 kB' 'Active(anon): 6776468 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534044 kB' 'Mapped: 194568 kB' 'Shmem: 6245780 kB' 'KReclaimable: 232796 kB' 'Slab: 814000 kB' 'SReclaimable: 232796 kB' 'SUnreclaim: 581204 kB' 'KernelStack: 20576 kB' 'PageTables: 9008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8317980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315372 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2984916 kB' 'DirectMap2M: 15568896 kB' 'DirectMap1G: 183500800 kB'
00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] ... same [[ ]]/continue/IFS/read trace repeated for each /proc/meminfo key before HugePages_Total (MemTotal, MemFree, MemAvailable, ...); the scan continues beyond this excerpt
Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.218 18:28:04 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.218 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
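The trace above shows the technique `setup/common.sh`'s `get_meminfo` helper uses: read `/proc/meminfo` (or a per-node meminfo file when one exists), strip the `Node <id> ` prefix that per-node files carry, then split each line on `': '` and return the value for the requested key. A minimal standalone sketch of that idea, not the SPDK script itself, assuming a Linux host with the usual `/proc` and `/sys` layout:

```shell
#!/usr/bin/env bash
# Sketch of the meminfo-parsing technique traced above (hypothetical
# reimplementation; assumes Linux /proc/meminfo and /sys node layout).
shopt -s extglob  # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # Prefer the per-node file when a node id is given and the file exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <id> "; drop that prefix
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        # "MemTotal:   123456 kB" -> var=MemTotal val=123456
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo MemTotal          # system-wide value, in kB
get_meminfo HugePages_Total 0 # per-node value when node0 exposes meminfo
```

The escaped-character matching in the trace corresponds to the quoted `"$get"` on the right of `[[ == ]]`, which forces a literal (non-glob) comparison.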
# IFS=': ' 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85669524 kB' 'MemUsed: 11993160 kB' 'SwapCached: 0 kB' 'Active: 5001808 kB' 'Inactive: 3338236 kB' 'Active(anon): 4844268 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3338236 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8193220 kB' 'Mapped: 117380 kB' 'AnonPages: 150052 kB' 'Shmem: 4697444 kB' 'KernelStack: 11848 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 126376 kB' 'Slab: 407908 kB' 'SReclaimable: 126376 kB' 'SUnreclaim: 281532 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:48.219 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.220 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.220 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:48.220 18:28:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:48.220 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:48.220 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:48.220 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:48.220 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:48.220 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:48.220 node0=1024 expecting 1024 00:02:48.220 18:28:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:48.220 00:02:48.220 real 0m3.787s 00:02:48.220 user 0m1.249s 00:02:48.220 sys 0m1.802s 00:02:48.220 18:28:04 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:48.220 18:28:04 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:02:48.220 ************************************ 00:02:48.220 END TEST default_setup 00:02:48.220 ************************************ 00:02:48.220 18:28:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:48.220 18:28:04 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:48.220 18:28:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:48.220 18:28:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:48.220 18:28:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
************************************ 00:02:48.220 START TEST per_node_1G_alloc 00:02:48.220 ************************************ 00:02:48.220 18:28:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:02:48.220 18:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:02:48.220 18:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:48.480 18:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:48.480 18:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:48.480 18:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:02:48.480 18:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:48.480 18:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:48.480 18:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:48.480 18:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:48.480 18:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:48.480 18:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:48.480 18:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:48.480 18:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:48.480 18:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:48.480 18:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:48.480 18:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:48.480 
18:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:48.480 18:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:48.480 18:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:48.480 18:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:48.480 18:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:48.480 18:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:48.480 18:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:48.480 18:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:48.480 18:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:02:48.480 18:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:48.480 18:28:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:51.017 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:51.017 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:51.017 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:51.017 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:51.017 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:51.017 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:51.017 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:51.017 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:51.017 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:51.017 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:51.017 0000:80:04.6 (8086 
2021): Already using the vfio-pci driver 00:02:51.017 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:51.017 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:51.017 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:51.017 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:51.017 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:51.017 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.017 18:28:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175546940 kB' 'MemAvailable: 178418420 kB' 'Buffers: 3896 kB' 'Cached: 10141504 kB' 'SwapCached: 0 kB' 'Active: 7169124 kB' 'Inactive: 3507524 kB' 'Active(anon): 6777116 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534404 kB' 'Mapped: 194608 kB' 'Shmem: 6245868 kB' 'KReclaimable: 232796 kB' 'Slab: 814220 kB' 'SReclaimable: 232796 kB' 'SUnreclaim: 581424 kB' 'KernelStack: 20544 kB' 'PageTables: 9148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8320924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315772 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 
kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2984916 kB' 'DirectMap2M: 15568896 kB' 'DirectMap1G: 183500800 kB' 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.017 18:28:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.017 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.018 
18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:51.018 18:28:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175548224 kB' 'MemAvailable: 178419704 kB' 'Buffers: 3896 kB' 'Cached: 10141508 kB' 'SwapCached: 0 kB' 'Active: 7170088 kB' 'Inactive: 3507524 kB' 'Active(anon): 6778080 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535416 kB' 'Mapped: 195112 kB' 'Shmem: 6245872 kB' 'KReclaimable: 232796 kB' 'Slab: 814176 kB' 'SReclaimable: 232796 kB' 'SUnreclaim: 581380 kB' 'KernelStack: 20720 kB' 'PageTables: 9108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8322036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315724 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2984916 kB' 'DirectMap2M: 15568896 kB' 'DirectMap1G: 183500800 kB' 00:02:51.018 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace elided: the read loop scans every /proc/meminfo key (MemTotal … HugePages_Rsvd) for HugePages_Surp, hitting `continue` on each non-match] 00:02:51.281 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.281 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.281 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:51.281 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:51.281 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:51.281 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:51.281 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:51.281 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:51.281 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.281 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.281 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.281 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.281 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.281 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.281 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.281 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.281
18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175545820 kB' 'MemAvailable: 178417300 kB' 'Buffers: 3896 kB' 'Cached: 10141524 kB' 'SwapCached: 0 kB' 'Active: 7175456 kB' 'Inactive: 3507524 kB' 'Active(anon): 6783448 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540836 kB' 'Mapped: 195112 kB' 'Shmem: 6245888 kB' 'KReclaimable: 232796 kB' 'Slab: 814104 kB' 'SReclaimable: 232796 kB' 'SUnreclaim: 581308 kB' 'KernelStack: 20864 kB' 'PageTables: 9528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8327088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315776 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2984916 kB' 'DirectMap2M: 15568896 kB' 'DirectMap1G: 183500800 kB' 00:02:51.281 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.281 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace elided: the same key-by-key scan repeats for HugePages_Rsvd; every non-matching key (MemTotal … CommitLimit) hits `continue`] 00:02:51.283 18:28:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.283 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.284 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.284 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:51.284 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:51.284 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:51.284 nr_hugepages=1024 00:02:51.284 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:51.284 resv_hugepages=0 00:02:51.284 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:51.284 surplus_hugepages=0 00:02:51.284 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:51.284 anon_hugepages=0 00:02:51.284 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # 
(( 1024 == nr_hugepages + surp + resv )) 00:02:51.284 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:51.284 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:51.284 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:51.284 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:51.284 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:51.284 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.284 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.284 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.284 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.284 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.284 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.284 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.284 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.284 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175547672 kB' 'MemAvailable: 178419152 kB' 'Buffers: 3896 kB' 'Cached: 10141548 kB' 'SwapCached: 0 kB' 'Active: 7170216 kB' 'Inactive: 3507524 kB' 'Active(anon): 6778208 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535472 
kB' 'Mapped: 194876 kB' 'Shmem: 6245912 kB' 'KReclaimable: 232796 kB' 'Slab: 814096 kB' 'SReclaimable: 232796 kB' 'SUnreclaim: 581300 kB' 'KernelStack: 20864 kB' 'PageTables: 9568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8319496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315756 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2984916 kB' 'DirectMap2M: 15568896 kB' 'DirectMap1G: 183500800 kB' 00:02:51.284 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: get_meminfo scans each /proc/meminfo field from MemTotal through Unaccepted; every non-matching key executes "continue" until HugePages_Total matches] 00:02:51.286 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.286 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:51.286 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:51.286 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024
== nr_hugepages + surp + resv )) 00:02:51.286 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:51.286 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:51.286 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:51.286 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:51.286 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:51.286 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:51.286 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:51.286 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:51.286 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:51.286 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:51.286 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:51.286 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:51.286 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:51.287 18:28:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86754768 kB' 'MemUsed: 10907916 kB' 'SwapCached: 0 kB' 'Active: 5002684 kB' 'Inactive: 3338236 kB' 'Active(anon): 4845144 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3338236 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8193296 kB' 'Mapped: 117420 kB' 'AnonPages: 150864 kB' 'Shmem: 4697520 kB' 'KernelStack: 11848 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 126376 kB' 'Slab: 407980 kB' 'SReclaimable: 126376 kB' 'SUnreclaim: 281604 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.287 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.288 18:28:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.288 18:28:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.288 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:02:51.289 18:28:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88796340 kB' 'MemUsed: 4922128 kB' 'SwapCached: 0 kB' 'Active: 2167528 kB' 'Inactive: 169288 kB' 'Active(anon): 1933060 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 169288 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1952172 kB' 'Mapped: 77188 kB' 'AnonPages: 384496 kB' 'Shmem: 1548416 kB' 'KernelStack: 8920 kB' 'PageTables: 4772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106420 kB' 'Slab: 406116 kB' 'SReclaimable: 106420 kB' 'SUnreclaim: 299696 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.289 18:28:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.289 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.290 18:28:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in 
"${!nodes_test[@]}" 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:51.290 node0=512 expecting 512 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:51.290 node1=512 expecting 512 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:51.290 00:02:51.290 real 0m2.957s 00:02:51.290 user 0m1.222s 00:02:51.290 sys 0m1.797s 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:51.290 18:28:07 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:51.290 ************************************ 00:02:51.290 END TEST per_node_1G_alloc 00:02:51.290 ************************************ 00:02:51.290 18:28:07 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:51.290 18:28:07 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:51.290 18:28:07 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:51.290 18:28:07 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:51.290 18:28:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:51.290 
************************************ 00:02:51.290 START TEST even_2G_alloc 00:02:51.290 ************************************ 00:02:51.290 18:28:07 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:02:51.290 18:28:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:51.290 18:28:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:51.290 18:28:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:51.290 18:28:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:51.290 18:28:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:51.290 18:28:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:51.290 18:28:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:51.290 18:28:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:51.290 18:28:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:51.290 18:28:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:51.290 18:28:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:51.290 18:28:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:51.290 18:28:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:51.290 18:28:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:51.290 18:28:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:51.290 18:28:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:51.290 18:28:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 
-- # : 512 00:02:51.290 18:28:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:51.290 18:28:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:51.290 18:28:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:51.290 18:28:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:51.290 18:28:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:51.290 18:28:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:51.290 18:28:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:51.290 18:28:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:51.290 18:28:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:02:51.290 18:28:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:51.290 18:28:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:53.822 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:53.822 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:53.822 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:53.822 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:53.822 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:53.822 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:53.822 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:53.822 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:53.822 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:53.822 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:53.822 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:53.822 0000:80:04.5 (8086 
2021): Already using the vfio-pci driver 00:02:53.822 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:53.822 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:53.822 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:53.822 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:53.822 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.823 18:28:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175559132 kB' 'MemAvailable: 178430612 kB' 'Buffers: 3896 kB' 'Cached: 10137812 kB' 'SwapCached: 0 kB' 'Active: 7165340 kB' 'Inactive: 3507524 kB' 'Active(anon): 6773332 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534164 kB' 'Mapped: 193640 kB' 'Shmem: 6242176 kB' 'KReclaimable: 232796 kB' 'Slab: 813912 kB' 'SReclaimable: 232796 kB' 'SUnreclaim: 581116 kB' 'KernelStack: 20768 kB' 'PageTables: 9604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8302856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315644 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2984916 kB' 'DirectMap2M: 15568896 kB' 'DirectMap1G: 183500800 kB' 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.823 18:28:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.823 18:28:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.823 
18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.823 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.824 18:28:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:53.824 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:54.088 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175558688 kB' 'MemAvailable: 178430168 kB' 'Buffers: 3896 kB' 'Cached: 10137816 kB' 'SwapCached: 0 kB' 'Active: 7164332 kB' 'Inactive: 3507524 kB' 'Active(anon): 6772324 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533672 kB' 'Mapped: 193556 kB' 'Shmem: 6242180 kB' 'KReclaimable: 232796 kB' 'Slab: 813880 kB' 'SReclaimable: 232796 kB' 'SUnreclaim: 581084 kB' 'KernelStack: 20832 kB' 'PageTables: 9276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8302872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315612 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2984916 kB' 'DirectMap2M: 15568896 kB' 'DirectMap1G: 183500800 kB'
00:02:54.090 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:54.090 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:54.090 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:54.090 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:54.090 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:54.090 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:54.090 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:54.090 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:54.090 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:54.090 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:54.090 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:54.090 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:54.090 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:54.090 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:54.090 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175558156 kB' 'MemAvailable: 178429636 kB' 'Buffers: 3896 kB' 'Cached: 10137832 kB' 'SwapCached: 0 kB' 'Active: 7164500 kB' 'Inactive: 3507524 kB' 'Active(anon): 6772492 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533844 kB' 'Mapped: 193556 kB' 'Shmem: 6242196 kB' 'KReclaimable: 232796 kB' 'Slab: 813880 kB' 'SReclaimable: 232796 kB' 'SUnreclaim: 581084 kB' 'KernelStack: 20864 kB' 'PageTables: 9404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8302892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315692 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2984916 kB' 'DirectMap2M: 15568896 kB' 'DirectMap1G: 183500800 kB'
00:02:54.090 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:54.090 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:54.091 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable ==
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.091 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.091 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.091 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.091 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.091 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.091 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.091 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.091 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.091 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.091 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.091 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.091 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.091 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.091 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.091 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.091 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.091 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.091 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.091 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.091 18:28:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.091 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.091 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.091 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.091 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.091 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.092 18:28:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.092 18:28:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.092 18:28:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:54.092 nr_hugepages=1024 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:54.092 resv_hugepages=0 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:54.092 surplus_hugepages=0 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:54.092 anon_hugepages=0 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:54.092 18:28:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.092 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175558008 kB' 'MemAvailable: 178429488 kB' 'Buffers: 3896 kB' 'Cached: 10137856 kB' 'SwapCached: 0 kB' 'Active: 7164104 kB' 'Inactive: 3507524 kB' 'Active(anon): 6772096 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533380 kB' 'Mapped: 193556 kB' 'Shmem: 6242220 kB' 'KReclaimable: 232796 kB' 'Slab: 813880 kB' 'SReclaimable: 232796 kB' 'SUnreclaim: 581084 kB' 'KernelStack: 20736 kB' 'PageTables: 9440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8302916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315660 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 
1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2984916 kB' 'DirectMap2M: 15568896 kB' 'DirectMap1G: 183500800 kB' 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.093 18:28:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.093 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.094 18:28:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.094 18:28:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.094 18:28:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.094 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.095 
18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:54.095 18:28:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86758432 kB' 'MemUsed: 10904252 kB' 'SwapCached: 0 kB' 'Active: 5001060 kB' 'Inactive: 3338236 kB' 'Active(anon): 4843520 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3338236 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8193400 kB' 'Mapped: 117128 kB' 'AnonPages: 149180 kB' 'Shmem: 4697624 kB' 'KernelStack: 11896 kB' 'PageTables: 4516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 126376 kB' 'Slab: 407584 kB' 'SReclaimable: 126376 kB' 'SUnreclaim: 281208 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.095 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.096 18:28:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.096 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88797588 kB' 'MemUsed: 4920880 kB' 'SwapCached: 0 kB' 'Active: 2163884 kB' 'Inactive: 169288 kB' 'Active(anon): 1929416 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 169288 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1948352 kB' 'Mapped: 76436 kB' 'AnonPages: 384884 kB' 'Shmem: 1544596 kB' 'KernelStack: 8968 kB' 'PageTables: 4980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106420 kB' 'Slab: 406296 kB' 'SReclaimable: 106420 kB' 'SUnreclaim: 299876 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.097 18:28:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.097 18:28:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.097 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.098 18:28:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.098 18:28:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.098 18:28:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.098 18:28:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:54.098 node0=512 expecting 512 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:54.098 18:28:10 
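A note on the backslash-riddled patterns that dominate the trace above (e.g. `\H\u\g\e\P\a\g\e\s\_\S\u\r\p`): this is not corruption in the log. It is how bash xtrace prints the right-hand side of a `[[ ... == ... ]]` comparison when that side was quoted, so each character is escaped to show it matches literally rather than as a glob. A minimal reproduction (the `key` variable is illustrative, not from the test scripts):

```shell
#!/usr/bin/env bash
# With xtrace on, a quoted RHS of [[ == ]] is echoed with every character
# backslash-escaped, exactly as seen in the autotest log above.
set -x
key=HugePages_Surp
[[ $key == "HugePages_Surp" ]] && echo match
set +x
```

Running this prints a trace line of the form `[[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]` on stderr, and `match` on stdout.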
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:54.098 node1=512 expecting 512 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:54.098 00:02:54.098 real 0m2.744s 00:02:54.098 user 0m1.085s 00:02:54.098 sys 0m1.684s 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:54.098 18:28:10 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:54.098 ************************************ 00:02:54.098 END TEST even_2G_alloc 00:02:54.098 ************************************ 00:02:54.099 18:28:10 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:54.099 18:28:10 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:54.099 18:28:10 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:54.099 18:28:10 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:54.099 18:28:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:54.099 ************************************ 00:02:54.099 START TEST odd_alloc 00:02:54.099 ************************************ 00:02:54.099 18:28:10 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:02:54.099 18:28:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:54.099 18:28:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:02:54.099 18:28:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:54.099 18:28:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:54.099 18:28:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:54.099 18:28:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # 
get_test_nr_hugepages_per_node 00:02:54.099 18:28:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:54.099 18:28:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:54.099 18:28:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:54.099 18:28:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:54.099 18:28:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:54.099 18:28:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:54.099 18:28:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:54.099 18:28:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:54.099 18:28:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:54.099 18:28:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:54.099 18:28:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:02:54.099 18:28:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:54.099 18:28:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:54.099 18:28:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:54.099 18:28:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:54.099 18:28:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:54.099 18:28:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:54.099 18:28:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:54.099 18:28:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:54.099 18:28:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:02:54.099 18:28:10 
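The `get_test_nr_hugepages_per_node` trace above shows the odd total of 1025 hugepages being distributed over 2 NUMA nodes as `node1=512`, `node0=513`. A minimal sketch of that even-split-with-remainder behavior (variable names mirror the log; the arithmetic is an assumption inferred from the traced assignments, not the literal `hugepages.sh` implementation):

```shell
#!/usr/bin/env bash
# Sketch: split an odd hugepage count across NUMA nodes, remainder to node 0,
# matching the 512/513 assignment visible in the odd_alloc trace.
nr_hugepages=1025
no_nodes=2
declare -a nodes_test

per_node=$((nr_hugepages / no_nodes))   # 512
remainder=$((nr_hugepages % no_nodes))  # 1

# give every node the even share first
for ((node = no_nodes - 1; node >= 0; node--)); do
  nodes_test[node]=$per_node
done
# the leftover page lands on node 0, yielding node0=513 node1=512
nodes_test[0]=$((per_node + remainder))

echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"
```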
setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:54.099 18:28:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:56.634 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:56.634 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:56.634 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:56.634 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:56.634 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:56.634 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:56.634 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:56.634 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:56.634 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:56.634 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:56.634 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:56.634 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:56.634 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:56.634 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:56.634 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:56.634 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:56.634 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@93 -- # local resv 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175549856 kB' 'MemAvailable: 178421336 kB' 'Buffers: 3896 kB' 'Cached: 10138160 kB' 'SwapCached: 0 kB' 'Active: 7166160 kB' 'Inactive: 3507524 kB' 'Active(anon): 6774152 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534368 kB' 
'Mapped: 193648 kB' 'Shmem: 6242524 kB' 'KReclaimable: 232796 kB' 'Slab: 814024 kB' 'SReclaimable: 232796 kB' 'SUnreclaim: 581228 kB' 'KernelStack: 20736 kB' 'PageTables: 9204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8303192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315756 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2984916 kB' 'DirectMap2M: 15568896 kB' 'DirectMap1G: 183500800 kB' 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.895 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.896 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.897 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.897 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.897 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.897 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:56.897 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:56.897 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:56.897 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.897 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:56.897 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:56.897 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.897 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.897 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.897 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.897 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.897 
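The long scan traced above is `get_meminfo` walking `/proc/meminfo` with `IFS=': '` and `read -r var val _`, hitting `continue` on every key until it finds the requested one, then echoing its value. A simplified sketch of that pattern (the optional file argument and the sample file are additions for testability, not part of `setup/common.sh`):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo lookup seen in the trace: split each meminfo line
# on ': ', keep the first two fields, and print the value for one key.
get_meminfo() {
  local get=$1 file=${2:-/proc/meminfo} var val _
  while IFS=': ' read -r var val _; do
    # matches literally, like the quoted [[ == ]] comparisons in the log
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done < "$file"
  return 1
}

# usage against a fabricated two-line sample (values illustrative only)
printf 'MemTotal: 191381152 kB\nHugePages_Surp: 0\n' > /tmp/meminfo.sample
get_meminfo HugePages_Surp /tmp/meminfo.sample   # prints 0
```

Note that the real script also strips a leading `Node <n> ` prefix (the `mem=("${mem[@]#Node +([0-9]) }")` expansion in the trace) so the same loop works for per-node `meminfo` files under `/sys/devices/system/node/`.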
18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:56.897 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:56.897 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:56.897 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175549488 kB' 'MemAvailable: 178420968 kB' 'Buffers: 3896 kB' 'Cached: 10138164 kB' 'SwapCached: 0 kB' 'Active: 7165280 kB' 'Inactive: 3507524 kB' 'Active(anon): 6773272 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533968 kB' 'Mapped: 193568 kB' 'Shmem: 6242528 kB' 'KReclaimable: 232796 kB' 'Slab: 813864 kB' 'SReclaimable: 232796 kB' 'SUnreclaim: 581068 kB' 'KernelStack: 20640 kB' 'PageTables: 9008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8303460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315628 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2984916 kB' 'DirectMap2M: 15568896 kB' 'DirectMap1G: 183500800 kB'
00:02:56.897 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:56.897 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[common.sh@31/@32 read/compare/continue trace repeated identically for every remaining /proc/meminfo key until HugePages_Surp]
00:02:56.898 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:56.898 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:56.898 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:56.898 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:56.898 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:56.898 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:56.898 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:02:56.898 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:56.898 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:56.898 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:56.898 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:56.898 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:56.898 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:56.898 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:56.899 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:56.899 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:56.899 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175549788 kB' 'MemAvailable: 178421268 kB' 'Buffers: 3896 kB' 'Cached: 10138180 kB' 'SwapCached: 0 kB' 'Active: 7165120 kB' 'Inactive: 3507524 kB' 'Active(anon): 6773112 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533824 kB' 'Mapped: 193568 kB' 'Shmem: 6242544 kB' 'KReclaimable: 232796 kB' 'Slab: 813864 kB' 'SReclaimable: 232796 kB' 'SUnreclaim: 581068 kB' 'KernelStack: 20608 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8300864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315628 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2984916 kB' 'DirectMap2M: 15568896 kB' 'DirectMap1G: 183500800 kB'
00:02:56.899 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:56.899 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[common.sh@31/@32 read/compare/continue trace repeated for each subsequent key; the captured log breaks off mid-scan at VmallocChunk]
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.900 18:28:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.900 18:28:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:02:56.900 nr_hugepages=1025 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:56.900 resv_hugepages=0 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:56.900 surplus_hugepages=0 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:56.900 anon_hugepages=0 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- 
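The `IFS=': '` / `read -r var val _` / `continue` records replayed above are one pass of setup/common.sh's meminfo scan: every `/proc/meminfo` key is compared against the requested field until it matches, and the matching value is echoed (here `HugePages_Rsvd`, which resolves to `resv=0`). A minimal sketch of that pattern, not the exact SPDK helper (the function name, the here-string sample, and the absent-key fallback are illustrative assumptions):

```shell
# Sketch of the setup/common.sh scan: split each "Key: value" record on
# IFS=': ', skip non-matching keys, echo the value of the requested one.
get_meminfo_sketch() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      echo "$val"  # matching key found: emit its value
      return 0
    fi
  done
  echo 0  # assumption: report 0 for an absent key, like the log's resv=0
}

meminfo_sample='MemTotal: 191381152 kB
HugePages_Total: 1025
HugePages_Rsvd: 0'

get_meminfo_sketch HugePages_Total <<< "$meminfo_sample"
```

The trailing `_` in the `read` soaks up the `kB` unit column, which is why the trace's `echo` lines carry bare numbers.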
setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.900 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175551228 kB' 'MemAvailable: 178422708 kB' 'Buffers: 3896 kB' 'Cached: 10138200 kB' 'SwapCached: 0 kB' 'Active: 7164544 kB' 'Inactive: 3507524 kB' 'Active(anon): 6772536 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533248 kB' 'Mapped: 193544 kB' 'Shmem: 6242564 kB' 'KReclaimable: 232796 kB' 'Slab: 813928 kB' 'SReclaimable: 232796 kB' 'SUnreclaim: 581132 kB' 'KernelStack: 20496 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8300884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315548 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2984916 kB' 'DirectMap2M: 15568896 kB' 'DirectMap1G: 183500800 kB' 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.901 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.902 18:28:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.902 18:28:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.902 18:28:13 
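The `get_nodes` step just traced is the point of the `odd_alloc` test: 1025 hugepages cannot split evenly across the 2 NUMA nodes found under `/sys/devices/system/node/`, so hugepages.sh records 512 for node 0 and 513 for node 1 in `nodes_sys`. A minimal sketch of that arithmetic under those assumptions (the function name is illustrative, not the SPDK helper):

```shell
# Sketch of the odd split: divide total pages evenly across nodes and
# let the later node(s) absorb the remainder, so 1025 over 2 nodes
# yields 512 and 513, matching nodes_sys[0]=512 / nodes_sys[1]=513.
split_odd_alloc() {
  local total=$1 nodes=$2 base extra i
  base=$((total / nodes))   # even share per node
  extra=$((total % nodes))  # leftover pages from an odd total
  for ((i = 0; i < nodes; i++)); do
    if ((i >= nodes - extra)); then
      echo $((base + 1))    # this node takes one extra page
    else
      echo "$base"
    fi
  done
}

split_odd_alloc 1025 2
```

The later `(( 1025 == nr_hugepages + surp + resv ))` checks in the trace then confirm the kernel honored that uneven request with no surplus or reserved pages.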
setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86738796 kB' 'MemUsed: 10923888 kB' 'SwapCached: 0 kB' 'Active: 5001240 kB' 'Inactive: 3338236 kB' 'Active(anon): 4843700 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3338236 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8193520 kB' 'Mapped: 117108 kB' 'AnonPages: 149148 kB' 'Shmem: 4697744 kB' 'KernelStack: 11816 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 126376 kB' 'Slab: 407412 kB' 'SReclaimable: 126376 kB' 'SUnreclaim: 281036 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.902 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.903 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88813132 kB' 'MemUsed: 4905336 kB' 'SwapCached: 0 kB' 'Active: 2163268 kB' 'Inactive: 169288 kB' 
'Active(anon): 1928800 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 169288 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1948556 kB' 'Mapped: 76436 kB' 'AnonPages: 384132 kB' 'Shmem: 1544800 kB' 'KernelStack: 8712 kB' 'PageTables: 4500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106420 kB' 'Slab: 406516 kB' 'SReclaimable: 106420 kB' 'SUnreclaim: 300096 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.904 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.165 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.165 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.165 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.165 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.165 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.165 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.165 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.165 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.165 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.165 
18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.165 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.166 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.167 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.167 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:57.167 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.167 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.167 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.167 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:57.167 18:28:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:57.167 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:57.167 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:57.167 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:57.167 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:57.167 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:57.167 node0=512 expecting 513 00:02:57.167 18:28:13 setup.sh.hugepages.odd_alloc 
-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:57.167 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:57.167 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:57.167 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:57.167 node1=513 expecting 512 00:02:57.167 18:28:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:57.167 00:02:57.167 real 0m2.883s 00:02:57.167 user 0m1.162s 00:02:57.167 sys 0m1.786s 00:02:57.167 18:28:13 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:57.167 18:28:13 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:57.167 ************************************ 00:02:57.167 END TEST odd_alloc 00:02:57.167 ************************************ 00:02:57.167 18:28:13 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:57.167 18:28:13 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:57.167 18:28:13 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:57.167 18:28:13 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:57.167 18:28:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:57.167 ************************************ 00:02:57.167 START TEST custom_alloc 00:02:57.167 ************************************ 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:57.167 18:28:13 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:02:57.167 18:28:13 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- 
# local -g nodes_test 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # 
nodes_test=() 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:57.167 18:28:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:59.705 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:59.705 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:59.705 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:59.705 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:59.705 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:59.705 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:59.705 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:59.705 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:59.705 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 
00:02:59.705 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:59.705 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:59.705 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:59.705 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:59.705 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:59.705 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:59.705 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:59.705 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:59.705 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:02:59.705 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:02:59.705 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:02:59.705 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:59.705 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:59.705 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:59.705 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f 
mem 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174515172 kB' 'MemAvailable: 177386652 kB' 'Buffers: 3896 kB' 'Cached: 10138332 kB' 'SwapCached: 0 kB' 'Active: 7165996 kB' 'Inactive: 3507524 kB' 'Active(anon): 6773988 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534044 kB' 'Mapped: 194160 kB' 'Shmem: 6242696 kB' 'KReclaimable: 232796 kB' 'Slab: 813892 kB' 'SReclaimable: 232796 kB' 'SUnreclaim: 581096 kB' 'KernelStack: 20496 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8303164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315580 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
3145728 kB' 'DirectMap4k: 2984916 kB' 'DirectMap2M: 15568896 kB' 'DirectMap1G: 183500800 kB' 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.706 18:28:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.706 18:28:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.706 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.707 18:28:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.707 18:28:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174507808 kB' 'MemAvailable: 177379288 kB' 'Buffers: 3896 kB' 'Cached: 10138336 kB' 'SwapCached: 0 kB' 'Active: 7170240 kB' 'Inactive: 3507524 kB' 'Active(anon): 6778232 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539308 kB' 'Mapped: 194064 kB' 'Shmem: 6242700 kB' 'KReclaimable: 232796 kB' 'Slab: 813880 kB' 'SReclaimable: 232796 kB' 'SUnreclaim: 581084 kB' 'KernelStack: 20496 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8307816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315520 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2984916 kB' 'DirectMap2M: 15568896 kB' 'DirectMap1G: 183500800 kB' 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.707 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.707 
18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[repeated xtrace entries condensed: the same setup/common.sh@31 IFS=': ' / read -r var val _ / setup/common.sh@32 [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue cycle repeats for every remaining /proc/meminfo key, MemAvailable through HugePages_Free, none of which match]
00:02:59.709 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:59.709 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
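The xtrace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo one key at a time with an `IFS=': '` read loop until the requested field matches. A minimal standalone sketch of that lookup (a simplified, hypothetical rewrite, assuming the system-wide /proc/meminfo; the real helper can also be pointed at a per-node /sys/devices/system/node/node<N>/meminfo):

```shell
#!/usr/bin/env bash
# Sketch of the key lookup driving the xtrace loop above.
get_meminfo() {
	local get=$1 var val _
	# IFS=': ' splits "HugePages_Surp:   0" into var=HugePages_Surp, val=0.
	while IFS=': ' read -r var val _; do
		# Skip every key until the requested one is found, then print its value.
		[[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
	done </proc/meminfo
	echo 0 # key absent on this kernel: report 0, as the log's "echo 0" does
}
get_meminfo HugePages_Surp
```

The escaped right-hand side in the log (`\H\u\g\e\P\a\g\e\s\_\S\u\r\p`) is just how bash xtrace quotes the `$get` expansion so it is compared literally rather than as a glob pattern.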
00:02:59.709 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:59.709 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:59.709 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:59.709 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:59.709 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:59.709 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:59.709 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:59.709 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:59.709 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:59.973 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:59.973 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:02:59.973 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:59.973 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:59.973 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:59.973 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:59.973 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:59.973 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:59.973 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:59.973 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:59.973 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:59.973 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174507840 kB' 'MemAvailable: 177379320 kB' 'Buffers: 3896 kB' 'Cached: 10138352 kB' 'SwapCached: 0 kB' 'Active: 7170832 kB' 'Inactive: 3507524 kB' 'Active(anon): 6778824 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539360 kB' 'Mapped: 194348 kB' 'Shmem: 6242716 kB' 'KReclaimable: 232796 kB' 'Slab: 813880 kB' 'SReclaimable: 232796 kB' 'SUnreclaim: 581084 kB' 'KernelStack: 20496 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8307836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315504 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2984916 kB' 'DirectMap2M: 15568896 kB' 'DirectMap1G: 183500800 kB'
00:02:59.973 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:59.973 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:59.973 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:59.973 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:59.973 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:59.973 18:28:16 setup.sh.hugepages.custom_alloc
-- setup/common.sh@32 -- # continue
[repeated xtrace entries condensed: the same setup/common.sh@31 IFS=': ' / read -r var val _ / setup/common.sh@32 [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue cycle repeats for each /proc/meminfo key from MemAvailable through Committed_AS, none of which match]
00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:59.974 18:28:16
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:59.974 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:02:59.974 nr_hugepages=1536 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:59.975 resv_hugepages=0 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:59.975 surplus_hugepages=0 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:59.975 anon_hugepages=0 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@28 -- # mapfile -t mem 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174507936 kB' 'MemAvailable: 177379416 kB' 'Buffers: 3896 kB' 'Cached: 10138392 kB' 'SwapCached: 0 kB' 'Active: 7165144 kB' 'Inactive: 3507524 kB' 'Active(anon): 6773136 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533680 kB' 'Mapped: 193560 kB' 'Shmem: 6242756 kB' 'KReclaimable: 232796 kB' 'Slab: 813880 kB' 'SReclaimable: 232796 kB' 'SUnreclaim: 581084 kB' 'KernelStack: 20480 kB' 'PageTables: 8668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8301736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315516 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2984916 kB' 'DirectMap2M: 15568896 kB' 'DirectMap1G: 183500800 kB' 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.975 18:28:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.975 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.976 
18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.976 18:28:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:59.976 18:28:16 
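The check the trace reaches here, `(( 1536 == nr_hugepages + surp + resv ))`, together with the values echoed earlier (nr_hugepages=1536, resv_hugepages=0, surplus_hugepages=0), is a simple accounting invariant: the `HugePages_Total` reported by the kernel must equal the requested allocation plus surplus plus reserved pages. A minimal standalone sketch of that check (variable names mirror the log output; the literal values are the ones this run reported, not part of any script):

```shell
# Consistency check on hugepage accounting, as seen in the trace above.
# Values taken from this run's log output; on another machine they differ.
nr_hugepages=1536        # pages the test requested
resv_hugepages=0         # HugePages_Rsvd from /proc/meminfo
surplus_hugepages=0      # HugePages_Surp from /proc/meminfo
hugepages_total=1536     # HugePages_Total from /proc/meminfo

# The kernel's total must account for every requested, surplus, and
# reserved page; a mismatch means allocation partially failed.
if (( hugepages_total == nr_hugepages + surplus_hugepages + resv_hugepages )); then
    echo "hugepage accounting consistent"
else
    echo "hugepage accounting mismatch" >&2
fi
```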
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.976 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:59.977 18:28:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86730192 kB' 'MemUsed: 10932492 kB' 'SwapCached: 0 kB' 'Active: 5000708 kB' 'Inactive: 3338236 kB' 'Active(anon): 4843168 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3338236 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8193620 kB' 'Mapped: 117124 kB' 'AnonPages: 148476 kB' 'Shmem: 4697844 kB' 'KernelStack: 11768 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 126376 kB' 'Slab: 407508 kB' 'SReclaimable: 126376 kB' 'SUnreclaim: 281132 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.977 18:28:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.977 18:28:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.977 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.978 18:28:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 87778412 kB' 'MemUsed: 5940056 kB' 'SwapCached: 0 kB' 'Active: 
2164496 kB' 'Inactive: 169288 kB' 'Active(anon): 1930028 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 169288 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1948688 kB' 'Mapped: 76436 kB' 'AnonPages: 385236 kB' 'Shmem: 1544932 kB' 'KernelStack: 8712 kB' 'PageTables: 4588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106420 kB' 'Slab: 406372 kB' 'SReclaimable: 106420 kB' 'SUnreclaim: 299952 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.978 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.979 18:28:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.979 18:28:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.979 
18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.979 18:28:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:59.979 node0=512 expecting 512 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:02:59.979 node1=1024 expecting 1024 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:59.979 00:02:59.979 real 0m2.855s 00:02:59.979 user 0m1.212s 00:02:59.979 sys 0m1.708s 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:59.979 18:28:16 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:59.979 ************************************ 00:02:59.979 END TEST custom_alloc 00:02:59.979 ************************************ 00:02:59.980 18:28:16 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:59.980 18:28:16 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:59.980 18:28:16 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:59.980 18:28:16 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:59.980 18:28:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:59.980 ************************************ 00:02:59.980 START TEST no_shrink_alloc 00:02:59.980 ************************************ 00:02:59.980 18:28:16 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:02:59.980 18:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:02:59.980 18:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:59.980 18:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:59.980 18:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:02:59.980 18:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:59.980 18:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:59.980 18:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:59.980 18:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:59.980 18:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:59.980 18:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:59.980 18:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:59.980 18:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:59.980 18:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:59.980 18:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:59.980 18:28:16 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:59.980 18:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:59.980 18:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:59.980 18:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:59.980 18:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:59.980 18:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:02:59.980 18:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:59.980 18:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:02.531 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:02.531 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:02.531 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:02.531 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:02.531 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:02.531 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:02.531 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:02.531 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:02.531 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:02.531 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:02.531 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:02.531 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:02.531 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:02.531 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:02.531 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:02.531 0000:80:04.1 
(8086 2021): Already using the vfio-pci driver 00:03:02.531 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:02.531 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:02.531 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:02.531 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:02.531 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:02.531 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:02.531 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:02.531 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:02.531 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:02.531 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:02.531 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:02.531 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:02.531 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:02.531 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:02.531 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.531 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.531 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:02.531 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.531 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:03:02.531 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.531 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.531 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175559140 kB' 'MemAvailable: 178430620 kB' 'Buffers: 3896 kB' 'Cached: 10138472 kB' 'SwapCached: 0 kB' 'Active: 7165992 kB' 'Inactive: 3507524 kB' 'Active(anon): 6773984 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533852 kB' 'Mapped: 193656 kB' 'Shmem: 6242836 kB' 'KReclaimable: 232796 kB' 'Slab: 813756 kB' 'SReclaimable: 232796 kB' 'SUnreclaim: 580960 kB' 'KernelStack: 20512 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8301916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315612 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2984916 kB' 'DirectMap2M: 15568896 kB' 'DirectMap1G: 183500800 kB' 00:03:02.531 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.531 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.531 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.531 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:02.531 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.531 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.531 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.531 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 
18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.532 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 18:28:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.533 
18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.533 18:28:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.533 18:28:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:02.533 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175560012 kB' 'MemAvailable: 178431492 kB' 'Buffers: 3896 kB' 'Cached: 10138476 kB' 'SwapCached: 0 kB' 'Active: 7165696 kB' 'Inactive: 3507524 kB' 'Active(anon): 6773688 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534064 kB' 'Mapped: 193580 kB' 'Shmem: 6242840 kB' 'KReclaimable: 232796 kB' 'Slab: 813752 kB' 'SReclaimable: 232796 kB' 'SUnreclaim: 580956 kB' 'KernelStack: 20496 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8301932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315564 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2984916 kB' 'DirectMap2M: 15568896 kB' 'DirectMap1G: 183500800 kB'
00:03:02.535 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:02.535 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:02.535 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:02.535 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:02.535 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:02.535 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:02.535 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:02.535 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:02.535 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:02.535 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 --
# mem_f=/proc/meminfo
00:03:02.535 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:02.535 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:02.535 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:02.535 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:02.535 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:02.535 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:02.536 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175560320 kB' 'MemAvailable: 178431800 kB' 'Buffers: 3896 kB' 'Cached: 10138492 kB' 'SwapCached: 0 kB' 'Active: 7165848 kB' 'Inactive: 3507524 kB' 'Active(anon): 6773840 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534800 kB' 'Mapped: 193580 kB' 'Shmem: 6242856 kB' 'KReclaimable: 232796 kB' 'Slab: 813744 kB' 'SReclaimable: 232796 kB' 'SUnreclaim: 580948 kB' 'KernelStack: 20528 kB' 'PageTables: 8872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8301588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315548 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2984916 kB' 'DirectMap2M: 15568896 kB' 'DirectMap1G: 183500800 kB'
00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:02.537 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:02.799 nr_hugepages=1024 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:02.799 resv_hugepages=0 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 
00:03:02.799 surplus_hugepages=0 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:02.799 anon_hugepages=0 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175560928 kB' 'MemAvailable: 178432408 kB' 'Buffers: 3896 kB' 'Cached: 10138512 kB' 'SwapCached: 0 kB' 'Active: 7165840 kB' 'Inactive: 3507524 kB' 'Active(anon): 6773832 kB' 
'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534192 kB' 'Mapped: 193580 kB' 'Shmem: 6242876 kB' 'KReclaimable: 232796 kB' 'Slab: 813740 kB' 'SReclaimable: 232796 kB' 'SUnreclaim: 580944 kB' 'KernelStack: 20480 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8302368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315548 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2984916 kB' 'DirectMap2M: 15568896 kB' 'DirectMap1G: 183500800 kB' 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.799 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.800 18:28:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.800 18:28:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.800 
18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.800 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.801 
18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 
00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@28 -- # mapfile -t mem
00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85700464 kB' 'MemUsed: 11962220 kB' 'SwapCached: 0 kB' 'Active: 5001160 kB' 'Inactive: 3338236 kB' 'Active(anon): 4843620 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3338236 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8193628 kB' 'Mapped: 117144 kB' 'AnonPages: 148912 kB' 'Shmem: 4697852 kB' 'KernelStack: 11784 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 126376 kB' 'Slab: 407300 kB' 'SReclaimable: 126376 kB' 'SUnreclaim: 280924 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:02.801 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue trace repeats for each remaining node0 meminfo field through HugePages_Free ...]
00:03:02.802 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:02.802 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:02.802 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:02.803 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:02.803 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:02.803 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:02.803 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:02.803 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:03:02.803 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:02.803 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:02.803 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:02.803 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:02.803 18:28:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:02.803 18:28:19 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:05.344 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:05.344 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:05.344 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:05.344 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:05.344 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:05.344 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:05.344 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:05.344 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:05.344 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:05.344 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:05.344 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:05.344 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:05.344 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:05.344 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:05.344 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:05.344 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:05.344 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:05.344 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:05.344 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:05.344 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:05.344 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:05.344 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:05.344 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:05.344 18:28:21 setup.sh.hugepages.no_shrink_alloc --
setup/hugepages.sh@93 -- # local resv
00:03:05.344 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:05.344 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:05.344 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:05.344 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:05.344 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:05.344 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:05.344 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:05.344 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.344 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:05.344 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:05.344 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.344 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.344 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:05.344 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:05.344 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175542836 kB' 'MemAvailable: 178414316 kB' 'Buffers: 3896 kB' 'Cached: 10138608 kB' 'SwapCached: 0 kB' 'Active: 7167788 kB' 'Inactive: 3507524 kB' 'Active(anon): 6775780 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536204 kB' 'Mapped: 193588 kB' 'Shmem: 6242972 kB' 'KReclaimable: 232796 kB' 'Slab: 813388 kB' 'SReclaimable: 232796 kB' 'SUnreclaim: 580592 kB' 'KernelStack: 20448 kB' 'PageTables: 8556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8302568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315532 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2984916 kB' 'DirectMap2M: 15568896 kB' 'DirectMap1G: 183500800 kB'
00:03:05.344 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:05.344 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue trace repeats for each /proc/meminfo field from MemFree through SecPageTables ...]
00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:05.345 18:28:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.345 18:28:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:05.345 
18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.345 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.346 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.346 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.346 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.346 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.346 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175542984 kB' 'MemAvailable: 178414464 kB' 'Buffers: 3896 kB' 'Cached: 10138612 kB' 'SwapCached: 0 kB' 'Active: 7168392 kB' 'Inactive: 3507524 kB' 'Active(anon): 6776384 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536792 kB' 'Mapped: 193588 kB' 'Shmem: 6242976 kB' 'KReclaimable: 232796 kB' 'Slab: 813468 kB' 'SReclaimable: 232796 kB' 'SUnreclaim: 580672 kB' 'KernelStack: 20496 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8302584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315516 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2984916 kB' 'DirectMap2M: 15568896 kB' 'DirectMap1G: 183500800 kB'
00:03:05.346 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:05.346 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:05.346 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:05.346 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical compare/continue trace repeats for each remaining /proc/meminfo key until HugePages_Surp is reached ...]
00:03:05.347 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:05.347 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:05.347 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:05.347 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:05.347 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:05.347 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:05.347 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:05.347 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:05.347 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:05.347 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.347 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:05.347 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:05.347 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.347 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.347 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:05.347 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:05.347 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175542984 kB' 'MemAvailable: 178414464 kB' 'Buffers: 3896 kB' 'Cached: 10138632 kB' 'SwapCached: 0 kB' 'Active: 7168440 kB' 'Inactive: 3507524 kB' 'Active(anon): 6776432 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536816 kB' 'Mapped: 193588 kB' 'Shmem: 6242996 kB' 'KReclaimable: 232796 kB' 'Slab: 813468 kB' 'SReclaimable: 232796 kB' 'SUnreclaim: 580672 kB' 'KernelStack: 20480 kB' 'PageTables: 8656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8302608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315516 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2984916 kB' 'DirectMap2M: 15568896 kB' 'DirectMap1G: 183500800 kB' 00:03:05.347 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.347 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.347 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.347 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.348 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.349 18:28:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.349 18:28:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.349 
18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:05.349 nr_hugepages=1024 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:05.349 resv_hugepages=0 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:05.349 surplus_hugepages=0 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:05.349 anon_hugepages=0 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:05.349 18:28:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175542788 kB' 'MemAvailable: 178414268 kB' 'Buffers: 3896 kB' 'Cached: 10138672 kB' 'SwapCached: 0 kB' 'Active: 7168096 kB' 'Inactive: 3507524 kB' 'Active(anon): 6776088 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536428 kB' 'Mapped: 193588 kB' 'Shmem: 6243036 kB' 'KReclaimable: 232796 kB' 'Slab: 813468 kB' 'SReclaimable: 232796 kB' 'SUnreclaim: 580672 kB' 'KernelStack: 20480 kB' 'PageTables: 8656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8302628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315516 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2984916 kB' 'DirectMap2M: 15568896 kB' 'DirectMap1G: 183500800 kB' 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.349 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.350 18:28:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.350 18:28:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical IFS/read/continue xtrace repeated for each remaining non-matching meminfo key (Active(file) through Unaccepted) elided ...]
00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # 
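For readability: the long run of `IFS=': '` / `read -r var val _` / `continue` xtrace above is a single loop in setup/common.sh's `get_meminfo` helper, scanning meminfo output for one key (here `HugePages_Total`, which matches with value 1024). A minimal standalone sketch of the same technique — the function name and the optional file argument are my own, and the real helper also handles per-node meminfo files:

```shell
#!/bin/sh
# Sketch of the meminfo scan traced above: with IFS=': ', each
# "Key: value kB" line splits into var=Key, val=value; non-matching
# keys fall through to the next iteration, a match prints its value.
get_meminfo_key() {
    key=$1
    file=${2:-/proc/meminfo}    # assumed argument; real helper picks the file itself
    while IFS=': ' read -r var val _; do
        if [ "$var" = "$key" ]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    return 1                    # key not present in the file
}
```

Because ':' and ' ' are both in IFS, the trailing colon on the key is consumed by the split, so no extra trimming is needed.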
no_nodes=2 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85690076 kB' 'MemUsed: 11972608 kB' 'SwapCached: 0 kB' 'Active: 5001748 kB' 'Inactive: 3338236 kB' 'Active(anon): 4844208 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3338236 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8193628 kB' 'Mapped: 117152 kB' 'AnonPages: 149464 kB' 'Shmem: 4697852 kB' 'KernelStack: 11752 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 126376 kB' 'Slab: 407064 kB' 'SReclaimable: 126376 kB' 'SUnreclaim: 280688 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.351 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.351 18:28:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical IFS/read/continue xtrace repeated for each remaining non-matching meminfo key (Active through HugePages_Free) elided ...]
00:03:05.352 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.352 18:28:22 setup.sh.hugepages.no_shrink_alloc -- 
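The per-node pass traced here reads `/sys/devices/system/node/node0/meminfo`, whose lines carry a `Node 0 ` prefix; the script strips it with the extglob substitution `mem=("${mem[@]#Node +([0-9]) }")` so the same key/value scan can be reused. A bash sketch of just that prefix strip (the function name and sample lines are illustrative):

```shell
#!/usr/bin/env bash
# Per-node meminfo lines look like "Node 0 HugePages_Surp: 0".
# Strip the "Node <n> " prefix so a generic "Key: value" parser applies.
shopt -s extglob                     # enables the +([0-9]) pattern below

strip_node_prefix() {
    local line=$1
    printf '%s\n' "${line#Node +([0-9]) }"
}
```

`shopt -s extglob` must be in effect before the pattern is parsed, which is why setup/common.sh relies on it being enabled globally.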
setup/common.sh@33 -- # echo 0 00:03:05.353 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:05.353 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:05.353 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:05.353 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:05.353 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:05.353 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:05.353 node0=1024 expecting 1024 00:03:05.353 18:28:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:05.353 00:03:05.353 real 0m5.426s 00:03:05.353 user 0m2.215s 00:03:05.353 sys 0m3.304s 00:03:05.353 18:28:22 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:05.353 18:28:22 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:05.353 ************************************ 00:03:05.353 END TEST no_shrink_alloc 00:03:05.353 ************************************ 00:03:05.624 18:28:22 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:05.624 18:28:22 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:05.624 18:28:22 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:05.624 18:28:22 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:05.624 18:28:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:05.624 18:28:22 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:05.624 18:28:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:05.624 18:28:22 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:05.625 18:28:22 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:05.625 18:28:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:05.625 18:28:22 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:05.625 18:28:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:05.625 18:28:22 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:05.625 18:28:22 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:05.625 18:28:22 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:05.625 00:03:05.625 real 0m21.203s 00:03:05.625 user 0m8.375s 00:03:05.625 sys 0m12.438s 00:03:05.625 18:28:22 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:05.625 18:28:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:05.625 ************************************ 00:03:05.625 END TEST hugepages 00:03:05.625 ************************************ 00:03:05.625 18:28:22 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:05.625 18:28:22 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:05.625 18:28:22 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:05.625 18:28:22 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:05.625 18:28:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:05.625 ************************************ 00:03:05.625 START TEST driver 00:03:05.625 ************************************ 00:03:05.625 18:28:22 setup.sh.driver -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:05.625 * Looking for test storage... 00:03:05.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:05.625 18:28:22 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:05.625 18:28:22 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:05.625 18:28:22 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:08.946 18:28:25 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:08.946 18:28:25 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:08.946 18:28:25 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.946 18:28:25 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:09.205 ************************************ 00:03:09.205 START TEST guess_driver 00:03:09.205 ************************************ 00:03:09.205 18:28:25 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:09.205 18:28:25 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:09.205 18:28:25 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:09.205 18:28:25 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:09.205 18:28:25 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:09.205 18:28:25 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:09.205 18:28:25 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:09.205 18:28:25 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:09.205 18:28:25 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:09.205 18:28:25 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # 
iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:09.205 18:28:25 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:03:09.205 18:28:25 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:09.205 18:28:25 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:09.205 18:28:25 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:09.205 18:28:25 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:09.205 18:28:25 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:09.205 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:09.205 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:09.205 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:09.205 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:09.205 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:09.205 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:09.205 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:09.205 18:28:25 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:09.205 18:28:25 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:09.205 18:28:25 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:09.205 18:28:25 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:09.205 18:28:25 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:09.205 Looking for driver=vfio-pci 00:03:09.205 18:28:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ 
_ _ _ marker setup_driver 00:03:09.205 18:28:25 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:09.205 18:28:25 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.205 18:28:25 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- 
setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 
-- # [[ vfio-pci == vfio-pci ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.742 18:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.677 18:28:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.677 18:28:29 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.677 18:28:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.677 18:28:29 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:12.677 18:28:29 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:12.677 18:28:29 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 
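The guess_driver loop above keeps matching `vfio-pci` for every marker line. A minimal sketch of the selection logic driver.sh is exercising (IOMMU groups present plus a resolvable module mean vfio-pci wins); the `deps` string passed in below is a hard-coded stand-in for real `modprobe --show-depends` output, not a live probe:

```shell
#!/usr/bin/env bash
# Sketch of the vfio driver pick seen in the log above (simplified).
# deps        : sample "modprobe --show-depends vfio_pci" output (illustrative)
# group_count : number of entries under /sys/kernel/iommu_groups (174 in the log)
pick_driver() {
  local deps=$1 group_count=$2
  # vfio-pci is usable when IOMMU groups exist and the module chain resolves
  # to real .ko objects, mirroring the [[ ... == *\.\k\o* ]] test in driver.sh.
  if (( group_count > 0 )) && [[ $deps == *.ko* ]]; then
    echo vfio-pci
  else
    echo 'No valid driver found'
  fi
}
```

With the 174 groups reported in this run, the sketch picks vfio-pci; with zero groups it falls through to the "No valid driver found" branch that driver.sh checks for.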
00:03:12.677 18:28:29 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:16.869 00:03:16.869 real 0m7.528s 00:03:16.869 user 0m2.161s 00:03:16.869 sys 0m3.820s 00:03:16.869 18:28:33 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:16.869 18:28:33 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:16.869 ************************************ 00:03:16.869 END TEST guess_driver 00:03:16.869 ************************************ 00:03:16.869 18:28:33 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:16.869 00:03:16.869 real 0m11.088s 00:03:16.869 user 0m3.036s 00:03:16.869 sys 0m5.671s 00:03:16.869 18:28:33 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:16.869 18:28:33 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:16.869 ************************************ 00:03:16.869 END TEST driver 00:03:16.869 ************************************ 00:03:16.869 18:28:33 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:16.869 18:28:33 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:16.869 18:28:33 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:16.869 18:28:33 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:16.869 18:28:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:16.869 ************************************ 00:03:16.869 START TEST devices 00:03:16.869 ************************************ 00:03:16.869 18:28:33 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:16.869 * Looking for test storage... 
00:03:16.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:16.869 18:28:33 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:16.869 18:28:33 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:16.869 18:28:33 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:16.869 18:28:33 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:20.161 18:28:36 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:20.161 18:28:36 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:20.161 18:28:36 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:20.161 18:28:36 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:20.161 18:28:36 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:20.161 18:28:36 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:20.161 18:28:36 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:20.161 18:28:36 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:20.161 18:28:36 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:20.161 18:28:36 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:20.161 18:28:36 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:20.161 18:28:36 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:20.161 18:28:36 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:20.161 18:28:36 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:20.161 18:28:36 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:20.161 18:28:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
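The get_zoned_devs pass above skips any NVMe block device whose sysfs `queue/zoned` attribute reports something other than `none`. A small sketch of that filter, assuming the attribute file is passed in directly (the real code reads `/sys/block/nvme*/queue/zoned`):

```shell
#!/usr/bin/env bash
# Sketch of the zoned-device check from the log above: returns success (0)
# when the device is zoned and must be excluded, failure (1) otherwise.
is_block_zoned() {
  local zoned_file=$1
  # No attribute file means the kernel exposes no zoned model: treat as not zoned.
  [[ -e $zoned_file ]] || return 1
  # "none" is the only value that marks a conventional (usable) device.
  [[ $(<"$zoned_file") != none ]]
}
```

In this run nvme0n1 reports `none`, so it survives the filter and becomes the test disk.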
00:03:20.161 18:28:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:20.161 18:28:36 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:03:20.161 18:28:36 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:20.161 18:28:36 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:20.161 18:28:36 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:20.161 18:28:36 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:20.161 No valid GPT data, bailing 00:03:20.162 18:28:36 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:20.162 18:28:36 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:20.162 18:28:36 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:20.162 18:28:36 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:20.162 18:28:36 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:20.162 18:28:36 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:20.162 18:28:36 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:20.162 18:28:36 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:20.162 18:28:36 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:20.162 18:28:36 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:03:20.162 18:28:36 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:20.162 18:28:36 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:20.162 18:28:36 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:20.162 18:28:36 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:20.162 18:28:36 setup.sh.devices -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:03:20.162 18:28:36 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:20.162 ************************************ 00:03:20.162 START TEST nvme_mount 00:03:20.162 ************************************ 00:03:20.162 18:28:36 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:20.162 18:28:36 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:20.162 18:28:36 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:20.162 18:28:36 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:20.162 18:28:36 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:20.162 18:28:36 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:20.162 18:28:36 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:20.162 18:28:36 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:20.162 18:28:36 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:20.162 18:28:36 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:20.162 18:28:36 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:20.162 18:28:36 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:20.162 18:28:36 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:20.162 18:28:36 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:20.162 18:28:36 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:20.162 18:28:36 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:20.162 18:28:36 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:20.162 18:28:36 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:20.162 18:28:36 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:20.162 18:28:36 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:21.100 Creating new GPT entries in memory. 00:03:21.100 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:21.100 other utilities. 00:03:21.100 18:28:37 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:21.100 18:28:37 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:21.100 18:28:37 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:21.100 18:28:37 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:21.100 18:28:37 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:22.040 Creating new GPT entries in memory. 00:03:22.040 The operation has completed successfully. 
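The sgdisk call that just completed used partition bounds computed in setup/common.sh: a 1 GiB request converted to 512-byte sectors, starting at sector 2048. The arithmetic can be sketched as:

```shell
#!/usr/bin/env bash
# Sketch of the partition-size arithmetic from setup/common.sh in the log:
# 1 GiB expressed in 512-byte sectors, first partition starting at sector 2048.
size=1073741824            # bytes (1 GiB)
(( size /= 512 ))          # sectors: 2097152
part_start=2048
(( part_end = part_start + size - 1 ))
# Reproduces the bounds seen in the flock'd sgdisk call above.
echo "sgdisk /dev/nvme0n1 --new=1:${part_start}:${part_end}"
```

This yields `--new=1:2048:2099199`, matching the command the log shows.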
00:03:22.040 18:28:38 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:22.040 18:28:38 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:22.040 18:28:38 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 892298 00:03:22.040 18:28:38 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:22.040 18:28:38 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:22.040 18:28:38 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:22.040 18:28:38 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:22.040 18:28:38 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:22.041 18:28:38 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:22.041 18:28:38 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:22.041 18:28:38 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:22.041 18:28:38 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:22.041 18:28:38 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:22.041 18:28:38 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
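The verify step that follows reads `setup.sh config` status lines per PCI device and sets `found=1` when the expected mount appears. A sketch of that match, with the status string below copied from this log as a sample rather than generated live:

```shell
#!/usr/bin/env bash
# Sketch of the "found" check in devices.sh verify: does a setup.sh config
# status line report the expected nvme mount as an active device?
status_matches() {
  local status=$1 mounts=$2
  # Mirrors the [[ $status == *"Active devices: "*"$mounts"* ]] glob in the log.
  [[ $status == *"Active devices: "*"$mounts"* ]]
}
```

For PCI address 0000:5e:00.0 the status line names `mount@nvme0n1:nvme0n1p1`, so the match succeeds and the test proceeds.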
00:03:22.041 18:28:38 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:22.041 18:28:38 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:22.041 18:28:38 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:22.041 18:28:38 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:22.041 18:28:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.041 18:28:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:22.041 18:28:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:22.041 18:28:38 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.041 18:28:38 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.945 18:28:40 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:23.945 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.204 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:24.204 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:24.204 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:24.204 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:24.204 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:24.204 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:24.204 
18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:24.204 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:24.205 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:24.205 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:24.205 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:24.205 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:24.205 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:24.465 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:24.465 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:24.465 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:24.465 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:24.465 18:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:24.465 18:28:40 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:24.465 18:28:40 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:24.465 18:28:40 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:24.465 18:28:40 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:24.465 18:28:41 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:24.465 18:28:41 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:24.465 18:28:41 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:24.465 18:28:41 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:24.465 18:28:41 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:24.465 18:28:41 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:24.465 18:28:41 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:24.465 18:28:41 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:24.465 18:28:41 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:24.465 18:28:41 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:24.465 18:28:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.465 18:28:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:24.465 18:28:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:24.465 18:28:41 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.465 18:28:41 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:27.003 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.003 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:27.004 18:28:43 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.264 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:27.264 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:27.264 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:27.264 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:27.264 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:27.264 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:27.264 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:03:27.264 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:27.264 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:27.264 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:27.264 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:27.264 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:27.264 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:27.264 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:27.264 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.264 18:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:27.264 18:28:43 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:27.264 18:28:43 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.264 18:28:43 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.866 18:28:46 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.866 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.126 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:30.126 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:30.126 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:30.126 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:30.126 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:30.126 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:30.126 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:30.126 18:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:30.126 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:30.126 00:03:30.126 real 0m10.197s 00:03:30.126 user 0m2.817s 00:03:30.126 sys 0m4.944s 00:03:30.126 18:28:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:30.126 18:28:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:30.126 ************************************ 00:03:30.126 END TEST nvme_mount 00:03:30.126 ************************************ 00:03:30.126 18:28:46 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:30.126 18:28:46 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 
00:03:30.126 18:28:46 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:30.126 18:28:46 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.126 18:28:46 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:30.126 ************************************ 00:03:30.126 START TEST dm_mount 00:03:30.126 ************************************ 00:03:30.126 18:28:46 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:30.126 18:28:46 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:30.126 18:28:46 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:30.126 18:28:46 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:30.126 18:28:46 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:30.126 18:28:46 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:30.126 18:28:46 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:30.126 18:28:46 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:30.126 18:28:46 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:30.126 18:28:46 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:30.126 18:28:46 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:30.126 18:28:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:30.126 18:28:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:30.126 18:28:46 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:30.126 18:28:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:30.126 18:28:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:30.126 18:28:46 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # 
parts+=("${disk}p$part") 00:03:30.126 18:28:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:30.126 18:28:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:30.126 18:28:46 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:30.126 18:28:46 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:30.126 18:28:46 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:31.062 Creating new GPT entries in memory. 00:03:31.062 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:31.062 other utilities. 00:03:31.062 18:28:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:31.062 18:28:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:31.062 18:28:47 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:31.062 18:28:47 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:31.062 18:28:47 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:32.439 Creating new GPT entries in memory. 00:03:32.439 The operation has completed successfully. 00:03:32.439 18:28:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:32.439 18:28:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:32.439 18:28:48 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:32.439 18:28:48 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:32.439 18:28:48 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:33.376 The operation has completed successfully. 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 896286 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- 
setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.377 18:28:49 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:03:35.922 18:28:52 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:35.922 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:35.923 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.923 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:35.923 18:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:35.923 18:28:52 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.923 18:28:52 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L 
/dev/mapper/nvme_dm_test ]] 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:38.458 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:38.458 18:28:54 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:38.458 00:03:38.458 real 0m8.283s 00:03:38.458 user 0m1.882s 00:03:38.458 sys 0m3.367s 00:03:38.459 18:28:54 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:38.459 18:28:54 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:38.459 ************************************ 00:03:38.459 END TEST dm_mount 00:03:38.459 ************************************ 00:03:38.459 18:28:55 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:38.459 18:28:55 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:38.459 18:28:55 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:38.459 18:28:55 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:38.459 18:28:55 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:38.459 18:28:55 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:38.459 18:28:55 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:38.459 18:28:55 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:38.718 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:38.718 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 
50 41 52 54 00:03:38.718 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:38.718 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:38.718 18:28:55 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:38.718 18:28:55 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:38.718 18:28:55 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:38.718 18:28:55 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:38.718 18:28:55 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:38.718 18:28:55 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:38.718 18:28:55 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:38.718 00:03:38.718 real 0m22.004s 00:03:38.718 user 0m5.934s 00:03:38.718 sys 0m10.447s 00:03:38.718 18:28:55 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:38.718 18:28:55 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:38.718 ************************************ 00:03:38.718 END TEST devices 00:03:38.718 ************************************ 00:03:38.718 18:28:55 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:38.718 00:03:38.718 real 1m12.717s 00:03:38.718 user 0m23.364s 00:03:38.718 sys 0m39.474s 00:03:38.718 18:28:55 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:38.718 18:28:55 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:38.718 ************************************ 00:03:38.718 END TEST setup.sh 00:03:38.718 ************************************ 00:03:38.718 18:28:55 -- common/autotest_common.sh@1142 -- # return 0 00:03:38.718 18:28:55 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:41.255 Hugepages 00:03:41.255 node hugesize free / total 
00:03:41.255 node0 1048576kB 0 / 0 00:03:41.255 node0 2048kB 2048 / 2048 00:03:41.255 node1 1048576kB 0 / 0 00:03:41.255 node1 2048kB 0 / 0 00:03:41.255 00:03:41.255 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:41.255 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:41.515 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:41.515 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:41.515 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:41.515 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:41.515 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:41.515 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:41.515 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:41.515 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:41.515 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:41.515 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:41.515 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:41.515 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:41.515 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:41.515 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:41.515 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:41.515 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:41.515 18:28:58 -- spdk/autotest.sh@130 -- # uname -s 00:03:41.515 18:28:58 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:41.515 18:28:58 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:41.515 18:28:58 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:44.808 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:44.808 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:44.808 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:44.808 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:44.808 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:44.808 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:44.808 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:44.808 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:44.808 
0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:44.808 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:44.808 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:44.808 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:44.808 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:44.808 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:44.808 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:44.808 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:45.066 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:45.324 18:29:01 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:46.261 18:29:02 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:46.261 18:29:02 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:46.261 18:29:02 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:46.261 18:29:02 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:46.261 18:29:02 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:46.261 18:29:02 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:46.261 18:29:02 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:46.261 18:29:02 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:46.261 18:29:02 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:46.261 18:29:02 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:46.261 18:29:02 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:03:46.261 18:29:02 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:48.796 Waiting for block devices as requested 00:03:49.056 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:49.056 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:49.056 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:49.315 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:49.315 0000:00:04.4 (8086 
2021): vfio-pci -> ioatdma 00:03:49.315 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:49.315 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:49.609 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:49.609 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:49.609 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:49.609 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:49.885 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:49.885 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:49.885 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:50.145 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:50.145 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:50.145 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:50.145 18:29:06 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:50.145 18:29:06 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:50.145 18:29:06 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:03:50.145 18:29:06 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:03:50.145 18:29:06 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:50.145 18:29:06 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:50.145 18:29:06 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:50.145 18:29:06 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:50.145 18:29:06 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:50.145 18:29:06 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:50.145 18:29:06 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:50.145 18:29:06 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:50.145 18:29:06 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:50.404 18:29:06 -- 
common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:03:50.404 18:29:06 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:50.404 18:29:06 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:50.404 18:29:06 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:50.404 18:29:06 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:50.404 18:29:06 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:50.404 18:29:06 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:50.404 18:29:06 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:50.404 18:29:06 -- common/autotest_common.sh@1557 -- # continue 00:03:50.404 18:29:06 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:50.404 18:29:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:50.404 18:29:06 -- common/autotest_common.sh@10 -- # set +x 00:03:50.404 18:29:06 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:50.404 18:29:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:50.404 18:29:06 -- common/autotest_common.sh@10 -- # set +x 00:03:50.404 18:29:06 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:52.308 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:52.308 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:52.308 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:52.308 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:52.308 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:52.568 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:52.568 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:52.568 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:52.568 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:52.568 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:52.568 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:52.568 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:52.568 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:52.568 0000:80:04.2 (8086 
2021): ioatdma -> vfio-pci 00:03:52.568 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:52.568 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:53.505 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:53.505 18:29:10 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:53.505 18:29:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:53.505 18:29:10 -- common/autotest_common.sh@10 -- # set +x 00:03:53.505 18:29:10 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:53.505 18:29:10 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:53.505 18:29:10 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:53.505 18:29:10 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:53.505 18:29:10 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:53.505 18:29:10 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:53.505 18:29:10 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:53.505 18:29:10 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:53.505 18:29:10 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:53.505 18:29:10 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:53.505 18:29:10 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:53.505 18:29:10 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:53.505 18:29:10 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:03:53.505 18:29:10 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:53.505 18:29:10 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:53.505 18:29:10 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:03:53.505 18:29:10 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:53.505 18:29:10 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:03:53.505 18:29:10 -- 
common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0 00:03:53.505 18:29:10 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]] 00:03:53.505 18:29:10 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=905030 00:03:53.505 18:29:10 -- common/autotest_common.sh@1598 -- # waitforlisten 905030 00:03:53.505 18:29:10 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:53.505 18:29:10 -- common/autotest_common.sh@829 -- # '[' -z 905030 ']' 00:03:53.505 18:29:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:53.505 18:29:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:53.505 18:29:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:53.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:53.505 18:29:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:53.505 18:29:10 -- common/autotest_common.sh@10 -- # set +x 00:03:53.764 [2024-07-15 18:29:10.226646] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:03:53.764 [2024-07-15 18:29:10.226693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905030 ] 00:03:53.764 EAL: No free 2048 kB hugepages reported on node 1 00:03:53.764 [2024-07-15 18:29:10.280522] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:53.764 [2024-07-15 18:29:10.354617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.331 18:29:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:54.331 18:29:11 -- common/autotest_common.sh@862 -- # return 0 00:03:54.331 18:29:11 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:03:54.331 18:29:11 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:03:54.331 18:29:11 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:57.620 nvme0n1 00:03:57.620 18:29:13 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:57.620 [2024-07-15 18:29:14.139715] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:57.620 request: 00:03:57.620 { 00:03:57.620 "nvme_ctrlr_name": "nvme0", 00:03:57.620 "password": "test", 00:03:57.620 "method": "bdev_nvme_opal_revert", 00:03:57.620 "req_id": 1 00:03:57.620 } 00:03:57.620 Got JSON-RPC error response 00:03:57.620 response: 00:03:57.620 { 00:03:57.620 "code": -32602, 00:03:57.620 "message": "Invalid parameters" 00:03:57.620 } 00:03:57.620 18:29:14 -- common/autotest_common.sh@1604 -- # true 00:03:57.620 18:29:14 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:03:57.620 18:29:14 -- common/autotest_common.sh@1608 -- # killprocess 905030 00:03:57.620 18:29:14 -- 
common/autotest_common.sh@948 -- # '[' -z 905030 ']' 00:03:57.620 18:29:14 -- common/autotest_common.sh@952 -- # kill -0 905030 00:03:57.620 18:29:14 -- common/autotest_common.sh@953 -- # uname 00:03:57.620 18:29:14 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:57.620 18:29:14 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 905030 00:03:57.620 18:29:14 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:57.620 18:29:14 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:57.620 18:29:14 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 905030' 00:03:57.620 killing process with pid 905030 00:03:57.620 18:29:14 -- common/autotest_common.sh@967 -- # kill 905030 00:03:57.620 18:29:14 -- common/autotest_common.sh@972 -- # wait 905030 00:03:59.527 18:29:15 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:59.527 18:29:15 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:59.527 18:29:15 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:59.527 18:29:15 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:59.527 18:29:15 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:59.527 18:29:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:59.527 18:29:15 -- common/autotest_common.sh@10 -- # set +x 00:03:59.527 18:29:15 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:59.527 18:29:15 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:59.527 18:29:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.527 18:29:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.527 18:29:15 -- common/autotest_common.sh@10 -- # set +x 00:03:59.527 ************************************ 00:03:59.527 START TEST env 00:03:59.527 ************************************ 00:03:59.527 18:29:15 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:59.527 * Looking for 
test storage... 00:03:59.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:59.527 18:29:15 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:59.527 18:29:15 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.527 18:29:15 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.527 18:29:15 env -- common/autotest_common.sh@10 -- # set +x 00:03:59.527 ************************************ 00:03:59.527 START TEST env_memory 00:03:59.527 ************************************ 00:03:59.527 18:29:15 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:59.527 00:03:59.527 00:03:59.527 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.527 http://cunit.sourceforge.net/ 00:03:59.527 00:03:59.527 00:03:59.527 Suite: memory 00:03:59.527 Test: alloc and free memory map ...[2024-07-15 18:29:16.031323] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:59.527 passed 00:03:59.527 Test: mem map translation ...[2024-07-15 18:29:16.049182] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:59.527 [2024-07-15 18:29:16.049195] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:59.527 [2024-07-15 18:29:16.049232] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:59.527 [2024-07-15 18:29:16.049239] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:59.527 passed 00:03:59.527 Test: mem map registration ...[2024-07-15 18:29:16.085738] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:59.527 [2024-07-15 18:29:16.085755] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:59.527 passed 00:03:59.527 Test: mem map adjacent registrations ...passed 00:03:59.527 00:03:59.527 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.527 suites 1 1 n/a 0 0 00:03:59.527 tests 4 4 4 0 0 00:03:59.527 asserts 152 152 152 0 n/a 00:03:59.527 00:03:59.527 Elapsed time = 0.132 seconds 00:03:59.527 00:03:59.527 real 0m0.144s 00:03:59.527 user 0m0.138s 00:03:59.527 sys 0m0.006s 00:03:59.527 18:29:16 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.527 18:29:16 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:59.527 ************************************ 00:03:59.527 END TEST env_memory 00:03:59.527 ************************************ 00:03:59.527 18:29:16 env -- common/autotest_common.sh@1142 -- # return 0 00:03:59.527 18:29:16 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:59.527 18:29:16 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.527 18:29:16 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.527 18:29:16 env -- common/autotest_common.sh@10 -- # set +x 00:03:59.527 ************************************ 00:03:59.527 START TEST env_vtophys 00:03:59.527 ************************************ 00:03:59.527 18:29:16 env.env_vtophys -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:59.527 EAL: lib.eal log level changed from notice to debug 00:03:59.527 EAL: Detected lcore 0 as core 0 on socket 0 00:03:59.527 EAL: Detected lcore 1 as core 1 on socket 0 00:03:59.527 EAL: Detected lcore 2 as core 2 on socket 0 00:03:59.527 EAL: Detected lcore 3 as core 3 on socket 0 00:03:59.527 EAL: Detected lcore 4 as core 4 on socket 0 00:03:59.527 EAL: Detected lcore 5 as core 5 on socket 0 00:03:59.527 EAL: Detected lcore 6 as core 6 on socket 0 00:03:59.527 EAL: Detected lcore 7 as core 8 on socket 0 00:03:59.527 EAL: Detected lcore 8 as core 9 on socket 0 00:03:59.527 EAL: Detected lcore 9 as core 10 on socket 0 00:03:59.527 EAL: Detected lcore 10 as core 11 on socket 0 00:03:59.527 EAL: Detected lcore 11 as core 12 on socket 0 00:03:59.527 EAL: Detected lcore 12 as core 13 on socket 0 00:03:59.527 EAL: Detected lcore 13 as core 16 on socket 0 00:03:59.527 EAL: Detected lcore 14 as core 17 on socket 0 00:03:59.527 EAL: Detected lcore 15 as core 18 on socket 0 00:03:59.527 EAL: Detected lcore 16 as core 19 on socket 0 00:03:59.527 EAL: Detected lcore 17 as core 20 on socket 0 00:03:59.527 EAL: Detected lcore 18 as core 21 on socket 0 00:03:59.527 EAL: Detected lcore 19 as core 25 on socket 0 00:03:59.527 EAL: Detected lcore 20 as core 26 on socket 0 00:03:59.527 EAL: Detected lcore 21 as core 27 on socket 0 00:03:59.527 EAL: Detected lcore 22 as core 28 on socket 0 00:03:59.527 EAL: Detected lcore 23 as core 29 on socket 0 00:03:59.527 EAL: Detected lcore 24 as core 0 on socket 1 00:03:59.527 EAL: Detected lcore 25 as core 1 on socket 1 00:03:59.527 EAL: Detected lcore 26 as core 2 on socket 1 00:03:59.527 EAL: Detected lcore 27 as core 3 on socket 1 00:03:59.527 EAL: Detected lcore 28 as core 4 on socket 1 00:03:59.527 EAL: Detected lcore 29 as core 5 on socket 1 00:03:59.527 EAL: Detected lcore 30 as core 6 on socket 1 00:03:59.527 EAL: Detected lcore 31 as core 9 on socket 
1 00:03:59.527 EAL: Detected lcore 32 as core 10 on socket 1 00:03:59.527 EAL: Detected lcore 33 as core 11 on socket 1 00:03:59.527 EAL: Detected lcore 34 as core 12 on socket 1 00:03:59.527 EAL: Detected lcore 35 as core 13 on socket 1 00:03:59.527 EAL: Detected lcore 36 as core 16 on socket 1 00:03:59.527 EAL: Detected lcore 37 as core 17 on socket 1 00:03:59.527 EAL: Detected lcore 38 as core 18 on socket 1 00:03:59.527 EAL: Detected lcore 39 as core 19 on socket 1 00:03:59.527 EAL: Detected lcore 40 as core 20 on socket 1 00:03:59.527 EAL: Detected lcore 41 as core 21 on socket 1 00:03:59.527 EAL: Detected lcore 42 as core 24 on socket 1 00:03:59.527 EAL: Detected lcore 43 as core 25 on socket 1 00:03:59.527 EAL: Detected lcore 44 as core 26 on socket 1 00:03:59.527 EAL: Detected lcore 45 as core 27 on socket 1 00:03:59.527 EAL: Detected lcore 46 as core 28 on socket 1 00:03:59.527 EAL: Detected lcore 47 as core 29 on socket 1 00:03:59.527 EAL: Detected lcore 48 as core 0 on socket 0 00:03:59.527 EAL: Detected lcore 49 as core 1 on socket 0 00:03:59.527 EAL: Detected lcore 50 as core 2 on socket 0 00:03:59.527 EAL: Detected lcore 51 as core 3 on socket 0 00:03:59.527 EAL: Detected lcore 52 as core 4 on socket 0 00:03:59.527 EAL: Detected lcore 53 as core 5 on socket 0 00:03:59.527 EAL: Detected lcore 54 as core 6 on socket 0 00:03:59.527 EAL: Detected lcore 55 as core 8 on socket 0 00:03:59.527 EAL: Detected lcore 56 as core 9 on socket 0 00:03:59.527 EAL: Detected lcore 57 as core 10 on socket 0 00:03:59.527 EAL: Detected lcore 58 as core 11 on socket 0 00:03:59.527 EAL: Detected lcore 59 as core 12 on socket 0 00:03:59.527 EAL: Detected lcore 60 as core 13 on socket 0 00:03:59.527 EAL: Detected lcore 61 as core 16 on socket 0 00:03:59.527 EAL: Detected lcore 62 as core 17 on socket 0 00:03:59.527 EAL: Detected lcore 63 as core 18 on socket 0 00:03:59.527 EAL: Detected lcore 64 as core 19 on socket 0 00:03:59.527 EAL: Detected lcore 65 as core 20 on socket 0 
00:03:59.527 EAL: Detected lcore 66 as core 21 on socket 0 00:03:59.527 EAL: Detected lcore 67 as core 25 on socket 0 00:03:59.527 EAL: Detected lcore 68 as core 26 on socket 0 00:03:59.527 EAL: Detected lcore 69 as core 27 on socket 0 00:03:59.527 EAL: Detected lcore 70 as core 28 on socket 0 00:03:59.527 EAL: Detected lcore 71 as core 29 on socket 0 00:03:59.527 EAL: Detected lcore 72 as core 0 on socket 1 00:03:59.528 EAL: Detected lcore 73 as core 1 on socket 1 00:03:59.528 EAL: Detected lcore 74 as core 2 on socket 1 00:03:59.528 EAL: Detected lcore 75 as core 3 on socket 1 00:03:59.528 EAL: Detected lcore 76 as core 4 on socket 1 00:03:59.528 EAL: Detected lcore 77 as core 5 on socket 1 00:03:59.528 EAL: Detected lcore 78 as core 6 on socket 1 00:03:59.528 EAL: Detected lcore 79 as core 9 on socket 1 00:03:59.528 EAL: Detected lcore 80 as core 10 on socket 1 00:03:59.528 EAL: Detected lcore 81 as core 11 on socket 1 00:03:59.528 EAL: Detected lcore 82 as core 12 on socket 1 00:03:59.528 EAL: Detected lcore 83 as core 13 on socket 1 00:03:59.528 EAL: Detected lcore 84 as core 16 on socket 1 00:03:59.528 EAL: Detected lcore 85 as core 17 on socket 1 00:03:59.528 EAL: Detected lcore 86 as core 18 on socket 1 00:03:59.528 EAL: Detected lcore 87 as core 19 on socket 1 00:03:59.528 EAL: Detected lcore 88 as core 20 on socket 1 00:03:59.528 EAL: Detected lcore 89 as core 21 on socket 1 00:03:59.528 EAL: Detected lcore 90 as core 24 on socket 1 00:03:59.528 EAL: Detected lcore 91 as core 25 on socket 1 00:03:59.528 EAL: Detected lcore 92 as core 26 on socket 1 00:03:59.528 EAL: Detected lcore 93 as core 27 on socket 1 00:03:59.528 EAL: Detected lcore 94 as core 28 on socket 1 00:03:59.528 EAL: Detected lcore 95 as core 29 on socket 1 00:03:59.528 EAL: Maximum logical cores by configuration: 128 00:03:59.528 EAL: Detected CPU lcores: 96 00:03:59.528 EAL: Detected NUMA nodes: 2 00:03:59.528 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:59.528 EAL: Detected 
shared linkage of DPDK 00:03:59.528 EAL: No shared files mode enabled, IPC will be disabled 00:03:59.787 EAL: Bus pci wants IOVA as 'DC' 00:03:59.787 EAL: Buses did not request a specific IOVA mode. 00:03:59.787 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:59.787 EAL: Selected IOVA mode 'VA' 00:03:59.787 EAL: No free 2048 kB hugepages reported on node 1 00:03:59.787 EAL: Probing VFIO support... 00:03:59.787 EAL: IOMMU type 1 (Type 1) is supported 00:03:59.787 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:59.787 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:59.787 EAL: VFIO support initialized 00:03:59.787 EAL: Ask a virtual area of 0x2e000 bytes 00:03:59.787 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:59.787 EAL: Setting up physically contiguous memory... 00:03:59.787 EAL: Setting maximum number of open files to 524288 00:03:59.787 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:59.787 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:59.787 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:59.787 EAL: Ask a virtual area of 0x61000 bytes 00:03:59.787 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:59.787 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:59.787 EAL: Ask a virtual area of 0x400000000 bytes 00:03:59.787 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:59.787 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:59.787 EAL: Ask a virtual area of 0x61000 bytes 00:03:59.787 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:59.787 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:59.787 EAL: Ask a virtual area of 0x400000000 bytes 00:03:59.787 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:59.787 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:59.787 EAL: Ask a virtual area of 0x61000 bytes 00:03:59.787 EAL: 
Virtual area found at 0x200800400000 (size = 0x61000) 00:03:59.787 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:59.787 EAL: Ask a virtual area of 0x400000000 bytes 00:03:59.787 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:59.787 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:59.788 EAL: Ask a virtual area of 0x61000 bytes 00:03:59.788 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:59.788 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:59.788 EAL: Ask a virtual area of 0x400000000 bytes 00:03:59.788 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:59.788 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:59.788 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:59.788 EAL: Ask a virtual area of 0x61000 bytes 00:03:59.788 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:59.788 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:59.788 EAL: Ask a virtual area of 0x400000000 bytes 00:03:59.788 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:59.788 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:59.788 EAL: Ask a virtual area of 0x61000 bytes 00:03:59.788 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:59.788 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:59.788 EAL: Ask a virtual area of 0x400000000 bytes 00:03:59.788 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:59.788 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:59.788 EAL: Ask a virtual area of 0x61000 bytes 00:03:59.788 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:59.788 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:59.788 EAL: Ask a virtual area of 0x400000000 bytes 00:03:59.788 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 
00:03:59.788 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:59.788 EAL: Ask a virtual area of 0x61000 bytes 00:03:59.788 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:59.788 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:59.788 EAL: Ask a virtual area of 0x400000000 bytes 00:03:59.788 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:59.788 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:59.788 EAL: Hugepages will be freed exactly as allocated. 00:03:59.788 EAL: No shared files mode enabled, IPC is disabled 00:03:59.788 EAL: No shared files mode enabled, IPC is disabled 00:03:59.788 EAL: TSC frequency is ~2300000 KHz 00:03:59.788 EAL: Main lcore 0 is ready (tid=7fe849a50a00;cpuset=[0]) 00:03:59.788 EAL: Trying to obtain current memory policy. 00:03:59.788 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.788 EAL: Restoring previous memory policy: 0 00:03:59.788 EAL: request: mp_malloc_sync 00:03:59.788 EAL: No shared files mode enabled, IPC is disabled 00:03:59.788 EAL: Heap on socket 0 was expanded by 2MB 00:03:59.788 EAL: No shared files mode enabled, IPC is disabled 00:03:59.788 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:59.788 EAL: Mem event callback 'spdk:(nil)' registered 00:03:59.788 00:03:59.788 00:03:59.788 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.788 http://cunit.sourceforge.net/ 00:03:59.788 00:03:59.788 00:03:59.788 Suite: components_suite 00:03:59.788 Test: vtophys_malloc_test ...passed 00:03:59.788 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:03:59.788 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.788 EAL: Restoring previous memory policy: 4 00:03:59.788 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.788 EAL: request: mp_malloc_sync 00:03:59.788 EAL: No shared files mode enabled, IPC is disabled 00:03:59.788 EAL: Heap on socket 0 was expanded by 4MB 00:03:59.788 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.788 EAL: request: mp_malloc_sync 00:03:59.788 EAL: No shared files mode enabled, IPC is disabled 00:03:59.788 EAL: Heap on socket 0 was shrunk by 4MB 00:03:59.788 EAL: Trying to obtain current memory policy. 00:03:59.788 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.788 EAL: Restoring previous memory policy: 4 00:03:59.788 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.788 EAL: request: mp_malloc_sync 00:03:59.788 EAL: No shared files mode enabled, IPC is disabled 00:03:59.788 EAL: Heap on socket 0 was expanded by 6MB 00:03:59.788 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.788 EAL: request: mp_malloc_sync 00:03:59.788 EAL: No shared files mode enabled, IPC is disabled 00:03:59.788 EAL: Heap on socket 0 was shrunk by 6MB 00:03:59.788 EAL: Trying to obtain current memory policy. 00:03:59.788 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.788 EAL: Restoring previous memory policy: 4 00:03:59.788 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.788 EAL: request: mp_malloc_sync 00:03:59.788 EAL: No shared files mode enabled, IPC is disabled 00:03:59.788 EAL: Heap on socket 0 was expanded by 10MB 00:03:59.788 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.788 EAL: request: mp_malloc_sync 00:03:59.788 EAL: No shared files mode enabled, IPC is disabled 00:03:59.788 EAL: Heap on socket 0 was shrunk by 10MB 00:03:59.788 EAL: Trying to obtain current memory policy. 
00:03:59.788 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.788 EAL: Restoring previous memory policy: 4 00:03:59.788 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.788 EAL: request: mp_malloc_sync 00:03:59.788 EAL: No shared files mode enabled, IPC is disabled 00:03:59.788 EAL: Heap on socket 0 was expanded by 18MB 00:03:59.788 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.788 EAL: request: mp_malloc_sync 00:03:59.788 EAL: No shared files mode enabled, IPC is disabled 00:03:59.788 EAL: Heap on socket 0 was shrunk by 18MB 00:03:59.788 EAL: Trying to obtain current memory policy. 00:03:59.788 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.788 EAL: Restoring previous memory policy: 4 00:03:59.788 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.788 EAL: request: mp_malloc_sync 00:03:59.788 EAL: No shared files mode enabled, IPC is disabled 00:03:59.788 EAL: Heap on socket 0 was expanded by 34MB 00:03:59.788 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.788 EAL: request: mp_malloc_sync 00:03:59.788 EAL: No shared files mode enabled, IPC is disabled 00:03:59.788 EAL: Heap on socket 0 was shrunk by 34MB 00:03:59.788 EAL: Trying to obtain current memory policy. 00:03:59.788 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.788 EAL: Restoring previous memory policy: 4 00:03:59.788 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.788 EAL: request: mp_malloc_sync 00:03:59.788 EAL: No shared files mode enabled, IPC is disabled 00:03:59.788 EAL: Heap on socket 0 was expanded by 66MB 00:03:59.788 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.788 EAL: request: mp_malloc_sync 00:03:59.788 EAL: No shared files mode enabled, IPC is disabled 00:03:59.788 EAL: Heap on socket 0 was shrunk by 66MB 00:03:59.788 EAL: Trying to obtain current memory policy. 
00:03:59.788 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.788 EAL: Restoring previous memory policy: 4 00:03:59.788 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.788 EAL: request: mp_malloc_sync 00:03:59.788 EAL: No shared files mode enabled, IPC is disabled 00:03:59.788 EAL: Heap on socket 0 was expanded by 130MB 00:03:59.788 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.788 EAL: request: mp_malloc_sync 00:03:59.788 EAL: No shared files mode enabled, IPC is disabled 00:03:59.788 EAL: Heap on socket 0 was shrunk by 130MB 00:03:59.788 EAL: Trying to obtain current memory policy. 00:03:59.788 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.788 EAL: Restoring previous memory policy: 4 00:03:59.788 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.788 EAL: request: mp_malloc_sync 00:03:59.788 EAL: No shared files mode enabled, IPC is disabled 00:03:59.788 EAL: Heap on socket 0 was expanded by 258MB 00:03:59.788 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.047 EAL: request: mp_malloc_sync 00:04:00.048 EAL: No shared files mode enabled, IPC is disabled 00:04:00.048 EAL: Heap on socket 0 was shrunk by 258MB 00:04:00.048 EAL: Trying to obtain current memory policy. 00:04:00.048 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.048 EAL: Restoring previous memory policy: 4 00:04:00.048 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.048 EAL: request: mp_malloc_sync 00:04:00.048 EAL: No shared files mode enabled, IPC is disabled 00:04:00.048 EAL: Heap on socket 0 was expanded by 514MB 00:04:00.048 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.307 EAL: request: mp_malloc_sync 00:04:00.307 EAL: No shared files mode enabled, IPC is disabled 00:04:00.307 EAL: Heap on socket 0 was shrunk by 514MB 00:04:00.307 EAL: Trying to obtain current memory policy. 
00:04:00.307 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.307 EAL: Restoring previous memory policy: 4 00:04:00.307 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.307 EAL: request: mp_malloc_sync 00:04:00.307 EAL: No shared files mode enabled, IPC is disabled 00:04:00.307 EAL: Heap on socket 0 was expanded by 1026MB 00:04:00.566 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.566 EAL: request: mp_malloc_sync 00:04:00.566 EAL: No shared files mode enabled, IPC is disabled 00:04:00.566 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:00.566 passed 00:04:00.566 00:04:00.566 Run Summary: Type Total Ran Passed Failed Inactive 00:04:00.566 suites 1 1 n/a 0 0 00:04:00.566 tests 2 2 2 0 0 00:04:00.566 asserts 497 497 497 0 n/a 00:04:00.566 00:04:00.566 Elapsed time = 0.958 seconds 00:04:00.566 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.566 EAL: request: mp_malloc_sync 00:04:00.566 EAL: No shared files mode enabled, IPC is disabled 00:04:00.566 EAL: Heap on socket 0 was shrunk by 2MB 00:04:00.566 EAL: No shared files mode enabled, IPC is disabled 00:04:00.566 EAL: No shared files mode enabled, IPC is disabled 00:04:00.566 EAL: No shared files mode enabled, IPC is disabled 00:04:00.566 00:04:00.566 real 0m1.066s 00:04:00.566 user 0m0.640s 00:04:00.566 sys 0m0.401s 00:04:00.566 18:29:17 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.567 18:29:17 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:00.567 ************************************ 00:04:00.567 END TEST env_vtophys 00:04:00.567 ************************************ 00:04:00.826 18:29:17 env -- common/autotest_common.sh@1142 -- # return 0 00:04:00.826 18:29:17 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:00.826 18:29:17 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.826 18:29:17 env -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:04:00.826 18:29:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:00.826 ************************************ 00:04:00.826 START TEST env_pci 00:04:00.826 ************************************ 00:04:00.826 18:29:17 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:00.826 00:04:00.826 00:04:00.826 CUnit - A unit testing framework for C - Version 2.1-3 00:04:00.826 http://cunit.sourceforge.net/ 00:04:00.826 00:04:00.826 00:04:00.826 Suite: pci 00:04:00.826 Test: pci_hook ...[2024-07-15 18:29:17.355439] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 906368 has claimed it 00:04:00.826 EAL: Cannot find device (10000:00:01.0) 00:04:00.826 EAL: Failed to attach device on primary process 00:04:00.826 passed 00:04:00.826 00:04:00.826 Run Summary: Type Total Ran Passed Failed Inactive 00:04:00.826 suites 1 1 n/a 0 0 00:04:00.826 tests 1 1 1 0 0 00:04:00.826 asserts 25 25 25 0 n/a 00:04:00.826 00:04:00.826 Elapsed time = 0.028 seconds 00:04:00.826 00:04:00.826 real 0m0.047s 00:04:00.826 user 0m0.015s 00:04:00.826 sys 0m0.032s 00:04:00.826 18:29:17 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.826 18:29:17 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:00.826 ************************************ 00:04:00.826 END TEST env_pci 00:04:00.826 ************************************ 00:04:00.826 18:29:17 env -- common/autotest_common.sh@1142 -- # return 0 00:04:00.826 18:29:17 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:00.826 18:29:17 env -- env/env.sh@15 -- # uname 00:04:00.826 18:29:17 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:00.826 18:29:17 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:00.826 18:29:17 env -- env/env.sh@24 -- # run_test env_dpdk_post_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:00.826 18:29:17 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:00.826 18:29:17 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.826 18:29:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:00.826 ************************************ 00:04:00.826 START TEST env_dpdk_post_init 00:04:00.826 ************************************ 00:04:00.826 18:29:17 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:00.826 EAL: Detected CPU lcores: 96 00:04:00.826 EAL: Detected NUMA nodes: 2 00:04:00.826 EAL: Detected shared linkage of DPDK 00:04:00.826 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:00.826 EAL: Selected IOVA mode 'VA' 00:04:00.826 EAL: No free 2048 kB hugepages reported on node 1 00:04:00.826 EAL: VFIO support initialized 00:04:00.826 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:01.086 EAL: Using IOMMU type 1 (Type 1) 00:04:01.086 EAL: Ignore mapping IO port bar(1) 00:04:01.086 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:01.086 EAL: Ignore mapping IO port bar(1) 00:04:01.086 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:01.086 EAL: Ignore mapping IO port bar(1) 00:04:01.086 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:01.086 EAL: Ignore mapping IO port bar(1) 00:04:01.086 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:01.086 EAL: Ignore mapping IO port bar(1) 00:04:01.086 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:01.086 EAL: Ignore mapping IO port bar(1) 00:04:01.086 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 
0000:00:04.5 (socket 0) 00:04:01.086 EAL: Ignore mapping IO port bar(1) 00:04:01.086 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:01.086 EAL: Ignore mapping IO port bar(1) 00:04:01.086 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:02.024 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:02.024 EAL: Ignore mapping IO port bar(1) 00:04:02.024 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:02.024 EAL: Ignore mapping IO port bar(1) 00:04:02.024 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:02.024 EAL: Ignore mapping IO port bar(1) 00:04:02.024 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:02.024 EAL: Ignore mapping IO port bar(1) 00:04:02.024 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:02.024 EAL: Ignore mapping IO port bar(1) 00:04:02.024 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:02.024 EAL: Ignore mapping IO port bar(1) 00:04:02.024 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:02.024 EAL: Ignore mapping IO port bar(1) 00:04:02.024 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:02.024 EAL: Ignore mapping IO port bar(1) 00:04:02.024 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:05.312 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:05.312 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:05.312 Starting DPDK initialization... 00:04:05.312 Starting SPDK post initialization... 00:04:05.312 SPDK NVMe probe 00:04:05.312 Attaching to 0000:5e:00.0 00:04:05.312 Attached to 0000:5e:00.0 00:04:05.312 Cleaning up... 
00:04:05.312 00:04:05.312 real 0m4.326s 00:04:05.312 user 0m3.262s 00:04:05.312 sys 0m0.136s 00:04:05.312 18:29:21 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.312 18:29:21 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:05.312 ************************************ 00:04:05.312 END TEST env_dpdk_post_init 00:04:05.312 ************************************ 00:04:05.312 18:29:21 env -- common/autotest_common.sh@1142 -- # return 0 00:04:05.312 18:29:21 env -- env/env.sh@26 -- # uname 00:04:05.312 18:29:21 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:05.312 18:29:21 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:05.312 18:29:21 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.312 18:29:21 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.312 18:29:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.312 ************************************ 00:04:05.312 START TEST env_mem_callbacks 00:04:05.312 ************************************ 00:04:05.312 18:29:21 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:05.312 EAL: Detected CPU lcores: 96 00:04:05.312 EAL: Detected NUMA nodes: 2 00:04:05.312 EAL: Detected shared linkage of DPDK 00:04:05.312 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:05.312 EAL: Selected IOVA mode 'VA' 00:04:05.312 EAL: No free 2048 kB hugepages reported on node 1 00:04:05.312 EAL: VFIO support initialized 00:04:05.312 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:05.312 00:04:05.312 00:04:05.312 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.312 http://cunit.sourceforge.net/ 00:04:05.312 00:04:05.312 00:04:05.312 Suite: memory 00:04:05.312 Test: test ... 
00:04:05.312 register 0x200000200000 2097152 00:04:05.312 malloc 3145728 00:04:05.312 register 0x200000400000 4194304 00:04:05.312 buf 0x200000500000 len 3145728 PASSED 00:04:05.312 malloc 64 00:04:05.312 buf 0x2000004fff40 len 64 PASSED 00:04:05.312 malloc 4194304 00:04:05.312 register 0x200000800000 6291456 00:04:05.312 buf 0x200000a00000 len 4194304 PASSED 00:04:05.312 free 0x200000500000 3145728 00:04:05.312 free 0x2000004fff40 64 00:04:05.312 unregister 0x200000400000 4194304 PASSED 00:04:05.312 free 0x200000a00000 4194304 00:04:05.312 unregister 0x200000800000 6291456 PASSED 00:04:05.312 malloc 8388608 00:04:05.312 register 0x200000400000 10485760 00:04:05.312 buf 0x200000600000 len 8388608 PASSED 00:04:05.312 free 0x200000600000 8388608 00:04:05.312 unregister 0x200000400000 10485760 PASSED 00:04:05.312 passed 00:04:05.312 00:04:05.312 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.312 suites 1 1 n/a 0 0 00:04:05.312 tests 1 1 1 0 0 00:04:05.312 asserts 15 15 15 0 n/a 00:04:05.312 00:04:05.312 Elapsed time = 0.006 seconds 00:04:05.312 00:04:05.312 real 0m0.056s 00:04:05.312 user 0m0.021s 00:04:05.312 sys 0m0.035s 00:04:05.312 18:29:21 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.312 18:29:21 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:05.312 ************************************ 00:04:05.312 END TEST env_mem_callbacks 00:04:05.312 ************************************ 00:04:05.312 18:29:21 env -- common/autotest_common.sh@1142 -- # return 0 00:04:05.312 00:04:05.312 real 0m6.083s 00:04:05.312 user 0m4.255s 00:04:05.312 sys 0m0.905s 00:04:05.312 18:29:21 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.312 18:29:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.312 ************************************ 00:04:05.312 END TEST env 00:04:05.312 ************************************ 00:04:05.312 18:29:21 -- common/autotest_common.sh@1142 -- # return 0 
00:04:05.312 18:29:21 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:05.312 18:29:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.312 18:29:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.312 18:29:21 -- common/autotest_common.sh@10 -- # set +x 00:04:05.571 ************************************ 00:04:05.571 START TEST rpc 00:04:05.571 ************************************ 00:04:05.571 18:29:22 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:05.571 * Looking for test storage... 00:04:05.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:05.571 18:29:22 rpc -- rpc/rpc.sh@65 -- # spdk_pid=907186 00:04:05.571 18:29:22 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:05.571 18:29:22 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:05.571 18:29:22 rpc -- rpc/rpc.sh@67 -- # waitforlisten 907186 00:04:05.571 18:29:22 rpc -- common/autotest_common.sh@829 -- # '[' -z 907186 ']' 00:04:05.571 18:29:22 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:05.571 18:29:22 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:05.571 18:29:22 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:05.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:05.571 18:29:22 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:05.571 18:29:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.571 [2024-07-15 18:29:22.161722] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:04:05.571 [2024-07-15 18:29:22.161771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid907186 ] 00:04:05.571 EAL: No free 2048 kB hugepages reported on node 1 00:04:05.571 [2024-07-15 18:29:22.215648] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.831 [2024-07-15 18:29:22.299822] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:05.831 [2024-07-15 18:29:22.299856] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 907186' to capture a snapshot of events at runtime. 00:04:05.831 [2024-07-15 18:29:22.299864] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:05.831 [2024-07-15 18:29:22.299871] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:05.831 [2024-07-15 18:29:22.299876] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid907186 for offline analysis/debug. 
00:04:05.831 [2024-07-15 18:29:22.299901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.400 18:29:22 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:06.400 18:29:22 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:06.400 18:29:22 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:06.400 18:29:22 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:06.400 18:29:22 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:06.400 18:29:22 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:06.400 18:29:22 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.400 18:29:22 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.400 18:29:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.400 ************************************ 00:04:06.400 START TEST rpc_integrity 00:04:06.400 ************************************ 00:04:06.400 18:29:22 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:06.400 18:29:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:06.400 18:29:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.400 18:29:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.400 18:29:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.400 18:29:22 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:04:06.400 18:29:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:06.400 18:29:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:06.400 18:29:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:06.400 18:29:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.400 18:29:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.400 18:29:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.400 18:29:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:06.400 18:29:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:06.400 18:29:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.400 18:29:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.400 18:29:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.400 18:29:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:06.400 { 00:04:06.400 "name": "Malloc0", 00:04:06.400 "aliases": [ 00:04:06.400 "d00b6224-dfa0-4192-96f7-58d6cafae338" 00:04:06.400 ], 00:04:06.400 "product_name": "Malloc disk", 00:04:06.400 "block_size": 512, 00:04:06.400 "num_blocks": 16384, 00:04:06.400 "uuid": "d00b6224-dfa0-4192-96f7-58d6cafae338", 00:04:06.400 "assigned_rate_limits": { 00:04:06.400 "rw_ios_per_sec": 0, 00:04:06.400 "rw_mbytes_per_sec": 0, 00:04:06.400 "r_mbytes_per_sec": 0, 00:04:06.400 "w_mbytes_per_sec": 0 00:04:06.400 }, 00:04:06.400 "claimed": false, 00:04:06.400 "zoned": false, 00:04:06.400 "supported_io_types": { 00:04:06.400 "read": true, 00:04:06.400 "write": true, 00:04:06.400 "unmap": true, 00:04:06.400 "flush": true, 00:04:06.400 "reset": true, 00:04:06.400 "nvme_admin": false, 00:04:06.400 "nvme_io": false, 00:04:06.400 "nvme_io_md": false, 00:04:06.400 "write_zeroes": true, 00:04:06.400 "zcopy": true, 00:04:06.400 "get_zone_info": false, 00:04:06.400 
"zone_management": false, 00:04:06.400 "zone_append": false, 00:04:06.400 "compare": false, 00:04:06.400 "compare_and_write": false, 00:04:06.400 "abort": true, 00:04:06.400 "seek_hole": false, 00:04:06.400 "seek_data": false, 00:04:06.400 "copy": true, 00:04:06.400 "nvme_iov_md": false 00:04:06.400 }, 00:04:06.400 "memory_domains": [ 00:04:06.400 { 00:04:06.400 "dma_device_id": "system", 00:04:06.400 "dma_device_type": 1 00:04:06.400 }, 00:04:06.400 { 00:04:06.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.400 "dma_device_type": 2 00:04:06.400 } 00:04:06.400 ], 00:04:06.400 "driver_specific": {} 00:04:06.400 } 00:04:06.400 ]' 00:04:06.400 18:29:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:06.676 18:29:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:06.676 18:29:23 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:06.676 18:29:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.676 18:29:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.676 [2024-07-15 18:29:23.115386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:06.676 [2024-07-15 18:29:23.115416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:06.676 [2024-07-15 18:29:23.115431] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11772d0 00:04:06.676 [2024-07-15 18:29:23.115437] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:06.676 [2024-07-15 18:29:23.116514] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:06.676 [2024-07-15 18:29:23.116536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:06.676 Passthru0 00:04:06.676 18:29:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.676 18:29:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:06.676 18:29:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.676 18:29:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.676 18:29:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.676 18:29:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:06.676 { 00:04:06.676 "name": "Malloc0", 00:04:06.676 "aliases": [ 00:04:06.676 "d00b6224-dfa0-4192-96f7-58d6cafae338" 00:04:06.676 ], 00:04:06.676 "product_name": "Malloc disk", 00:04:06.676 "block_size": 512, 00:04:06.676 "num_blocks": 16384, 00:04:06.676 "uuid": "d00b6224-dfa0-4192-96f7-58d6cafae338", 00:04:06.676 "assigned_rate_limits": { 00:04:06.676 "rw_ios_per_sec": 0, 00:04:06.676 "rw_mbytes_per_sec": 0, 00:04:06.676 "r_mbytes_per_sec": 0, 00:04:06.676 "w_mbytes_per_sec": 0 00:04:06.676 }, 00:04:06.676 "claimed": true, 00:04:06.676 "claim_type": "exclusive_write", 00:04:06.676 "zoned": false, 00:04:06.676 "supported_io_types": { 00:04:06.676 "read": true, 00:04:06.676 "write": true, 00:04:06.676 "unmap": true, 00:04:06.676 "flush": true, 00:04:06.676 "reset": true, 00:04:06.676 "nvme_admin": false, 00:04:06.676 "nvme_io": false, 00:04:06.676 "nvme_io_md": false, 00:04:06.676 "write_zeroes": true, 00:04:06.676 "zcopy": true, 00:04:06.676 "get_zone_info": false, 00:04:06.676 "zone_management": false, 00:04:06.676 "zone_append": false, 00:04:06.676 "compare": false, 00:04:06.676 "compare_and_write": false, 00:04:06.676 "abort": true, 00:04:06.676 "seek_hole": false, 00:04:06.676 "seek_data": false, 00:04:06.676 "copy": true, 00:04:06.676 "nvme_iov_md": false 00:04:06.676 }, 00:04:06.676 "memory_domains": [ 00:04:06.676 { 00:04:06.676 "dma_device_id": "system", 00:04:06.676 "dma_device_type": 1 00:04:06.676 }, 00:04:06.676 { 00:04:06.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.676 "dma_device_type": 2 00:04:06.676 } 00:04:06.676 ], 00:04:06.676 "driver_specific": {} 00:04:06.676 }, 00:04:06.676 { 
00:04:06.676 "name": "Passthru0", 00:04:06.676 "aliases": [ 00:04:06.676 "5fd3ee4a-2663-5a79-9025-700921ba5415" 00:04:06.676 ], 00:04:06.676 "product_name": "passthru", 00:04:06.676 "block_size": 512, 00:04:06.676 "num_blocks": 16384, 00:04:06.676 "uuid": "5fd3ee4a-2663-5a79-9025-700921ba5415", 00:04:06.676 "assigned_rate_limits": { 00:04:06.676 "rw_ios_per_sec": 0, 00:04:06.676 "rw_mbytes_per_sec": 0, 00:04:06.676 "r_mbytes_per_sec": 0, 00:04:06.676 "w_mbytes_per_sec": 0 00:04:06.676 }, 00:04:06.676 "claimed": false, 00:04:06.676 "zoned": false, 00:04:06.676 "supported_io_types": { 00:04:06.676 "read": true, 00:04:06.676 "write": true, 00:04:06.676 "unmap": true, 00:04:06.676 "flush": true, 00:04:06.676 "reset": true, 00:04:06.676 "nvme_admin": false, 00:04:06.676 "nvme_io": false, 00:04:06.676 "nvme_io_md": false, 00:04:06.676 "write_zeroes": true, 00:04:06.676 "zcopy": true, 00:04:06.676 "get_zone_info": false, 00:04:06.676 "zone_management": false, 00:04:06.676 "zone_append": false, 00:04:06.676 "compare": false, 00:04:06.676 "compare_and_write": false, 00:04:06.676 "abort": true, 00:04:06.676 "seek_hole": false, 00:04:06.676 "seek_data": false, 00:04:06.676 "copy": true, 00:04:06.676 "nvme_iov_md": false 00:04:06.676 }, 00:04:06.676 "memory_domains": [ 00:04:06.676 { 00:04:06.676 "dma_device_id": "system", 00:04:06.676 "dma_device_type": 1 00:04:06.676 }, 00:04:06.676 { 00:04:06.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.676 "dma_device_type": 2 00:04:06.676 } 00:04:06.676 ], 00:04:06.676 "driver_specific": { 00:04:06.676 "passthru": { 00:04:06.676 "name": "Passthru0", 00:04:06.676 "base_bdev_name": "Malloc0" 00:04:06.676 } 00:04:06.676 } 00:04:06.676 } 00:04:06.676 ]' 00:04:06.676 18:29:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:06.676 18:29:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:06.676 18:29:23 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:06.676 18:29:23 
rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.676 18:29:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.676 18:29:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.676 18:29:23 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:06.676 18:29:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.676 18:29:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.676 18:29:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.676 18:29:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:06.676 18:29:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.676 18:29:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.676 18:29:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.676 18:29:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:06.676 18:29:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:06.676 18:29:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:06.676 00:04:06.676 real 0m0.266s 00:04:06.676 user 0m0.173s 00:04:06.676 sys 0m0.035s 00:04:06.676 18:29:23 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.676 18:29:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.677 ************************************ 00:04:06.677 END TEST rpc_integrity 00:04:06.677 ************************************ 00:04:06.677 18:29:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:06.677 18:29:23 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:06.677 18:29:23 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.677 18:29:23 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.677 18:29:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.677 
************************************ 00:04:06.677 START TEST rpc_plugins 00:04:06.677 ************************************ 00:04:06.677 18:29:23 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:06.677 18:29:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:06.677 18:29:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.677 18:29:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.677 18:29:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.677 18:29:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:06.677 18:29:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:06.677 18:29:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.677 18:29:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.677 18:29:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.677 18:29:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:06.677 { 00:04:06.677 "name": "Malloc1", 00:04:06.677 "aliases": [ 00:04:06.677 "7a445fd2-2732-4b42-98e4-dcb644fcfed6" 00:04:06.677 ], 00:04:06.677 "product_name": "Malloc disk", 00:04:06.677 "block_size": 4096, 00:04:06.677 "num_blocks": 256, 00:04:06.677 "uuid": "7a445fd2-2732-4b42-98e4-dcb644fcfed6", 00:04:06.677 "assigned_rate_limits": { 00:04:06.677 "rw_ios_per_sec": 0, 00:04:06.677 "rw_mbytes_per_sec": 0, 00:04:06.677 "r_mbytes_per_sec": 0, 00:04:06.677 "w_mbytes_per_sec": 0 00:04:06.677 }, 00:04:06.677 "claimed": false, 00:04:06.677 "zoned": false, 00:04:06.677 "supported_io_types": { 00:04:06.677 "read": true, 00:04:06.677 "write": true, 00:04:06.677 "unmap": true, 00:04:06.677 "flush": true, 00:04:06.677 "reset": true, 00:04:06.677 "nvme_admin": false, 00:04:06.677 "nvme_io": false, 00:04:06.677 "nvme_io_md": false, 00:04:06.677 "write_zeroes": true, 00:04:06.677 "zcopy": true, 00:04:06.677 
"get_zone_info": false, 00:04:06.677 "zone_management": false, 00:04:06.677 "zone_append": false, 00:04:06.677 "compare": false, 00:04:06.677 "compare_and_write": false, 00:04:06.677 "abort": true, 00:04:06.677 "seek_hole": false, 00:04:06.677 "seek_data": false, 00:04:06.677 "copy": true, 00:04:06.677 "nvme_iov_md": false 00:04:06.677 }, 00:04:06.677 "memory_domains": [ 00:04:06.677 { 00:04:06.677 "dma_device_id": "system", 00:04:06.677 "dma_device_type": 1 00:04:06.677 }, 00:04:06.677 { 00:04:06.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.677 "dma_device_type": 2 00:04:06.677 } 00:04:06.677 ], 00:04:06.677 "driver_specific": {} 00:04:06.677 } 00:04:06.677 ]' 00:04:06.677 18:29:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:06.944 18:29:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:06.944 18:29:23 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:06.944 18:29:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.944 18:29:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.944 18:29:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.944 18:29:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:06.944 18:29:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.944 18:29:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.944 18:29:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.944 18:29:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:06.944 18:29:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:06.944 18:29:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:06.944 00:04:06.944 real 0m0.129s 00:04:06.944 user 0m0.080s 00:04:06.944 sys 0m0.021s 00:04:06.944 18:29:23 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.944 18:29:23 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:04:06.944 ************************************ 00:04:06.944 END TEST rpc_plugins 00:04:06.944 ************************************ 00:04:06.944 18:29:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:06.944 18:29:23 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:06.944 18:29:23 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.944 18:29:23 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.944 18:29:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.944 ************************************ 00:04:06.944 START TEST rpc_trace_cmd_test 00:04:06.944 ************************************ 00:04:06.944 18:29:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:06.944 18:29:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:06.944 18:29:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:06.944 18:29:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.944 18:29:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:06.944 18:29:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.944 18:29:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:06.944 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid907186", 00:04:06.944 "tpoint_group_mask": "0x8", 00:04:06.944 "iscsi_conn": { 00:04:06.944 "mask": "0x2", 00:04:06.944 "tpoint_mask": "0x0" 00:04:06.944 }, 00:04:06.944 "scsi": { 00:04:06.944 "mask": "0x4", 00:04:06.944 "tpoint_mask": "0x0" 00:04:06.944 }, 00:04:06.944 "bdev": { 00:04:06.944 "mask": "0x8", 00:04:06.944 "tpoint_mask": "0xffffffffffffffff" 00:04:06.944 }, 00:04:06.944 "nvmf_rdma": { 00:04:06.944 "mask": "0x10", 00:04:06.944 "tpoint_mask": "0x0" 00:04:06.944 }, 00:04:06.944 "nvmf_tcp": { 00:04:06.944 "mask": "0x20", 00:04:06.944 "tpoint_mask": "0x0" 00:04:06.944 }, 
00:04:06.944 "ftl": { 00:04:06.944 "mask": "0x40", 00:04:06.944 "tpoint_mask": "0x0" 00:04:06.944 }, 00:04:06.944 "blobfs": { 00:04:06.944 "mask": "0x80", 00:04:06.944 "tpoint_mask": "0x0" 00:04:06.944 }, 00:04:06.944 "dsa": { 00:04:06.944 "mask": "0x200", 00:04:06.944 "tpoint_mask": "0x0" 00:04:06.944 }, 00:04:06.944 "thread": { 00:04:06.944 "mask": "0x400", 00:04:06.944 "tpoint_mask": "0x0" 00:04:06.944 }, 00:04:06.944 "nvme_pcie": { 00:04:06.944 "mask": "0x800", 00:04:06.944 "tpoint_mask": "0x0" 00:04:06.944 }, 00:04:06.944 "iaa": { 00:04:06.944 "mask": "0x1000", 00:04:06.944 "tpoint_mask": "0x0" 00:04:06.944 }, 00:04:06.944 "nvme_tcp": { 00:04:06.944 "mask": "0x2000", 00:04:06.944 "tpoint_mask": "0x0" 00:04:06.944 }, 00:04:06.944 "bdev_nvme": { 00:04:06.944 "mask": "0x4000", 00:04:06.944 "tpoint_mask": "0x0" 00:04:06.944 }, 00:04:06.944 "sock": { 00:04:06.945 "mask": "0x8000", 00:04:06.945 "tpoint_mask": "0x0" 00:04:06.945 } 00:04:06.945 }' 00:04:06.945 18:29:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:06.945 18:29:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:06.945 18:29:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:06.945 18:29:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:06.945 18:29:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:06.945 18:29:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:06.945 18:29:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:07.204 18:29:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:07.204 18:29:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:07.204 18:29:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:07.204 00:04:07.204 real 0m0.207s 00:04:07.204 user 0m0.183s 00:04:07.204 sys 0m0.018s 00:04:07.204 18:29:23 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.204 18:29:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:07.204 ************************************ 00:04:07.204 END TEST rpc_trace_cmd_test 00:04:07.204 ************************************ 00:04:07.204 18:29:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:07.204 18:29:23 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:07.204 18:29:23 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:07.204 18:29:23 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:07.204 18:29:23 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.204 18:29:23 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.204 18:29:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.204 ************************************ 00:04:07.204 START TEST rpc_daemon_integrity 00:04:07.204 ************************************ 00:04:07.204 18:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:07.204 18:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:07.204 18:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.204 18:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.204 18:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.204 18:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:07.204 18:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:07.204 18:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:07.204 18:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:07.204 18:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.204 18:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.204 18:29:23 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.204 18:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:07.204 18:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:07.204 18:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.204 18:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.204 18:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.204 18:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:07.204 { 00:04:07.204 "name": "Malloc2", 00:04:07.204 "aliases": [ 00:04:07.204 "92282e12-bac0-4e76-b5fd-4a42003e20e1" 00:04:07.204 ], 00:04:07.204 "product_name": "Malloc disk", 00:04:07.204 "block_size": 512, 00:04:07.204 "num_blocks": 16384, 00:04:07.204 "uuid": "92282e12-bac0-4e76-b5fd-4a42003e20e1", 00:04:07.204 "assigned_rate_limits": { 00:04:07.204 "rw_ios_per_sec": 0, 00:04:07.204 "rw_mbytes_per_sec": 0, 00:04:07.204 "r_mbytes_per_sec": 0, 00:04:07.204 "w_mbytes_per_sec": 0 00:04:07.204 }, 00:04:07.205 "claimed": false, 00:04:07.205 "zoned": false, 00:04:07.205 "supported_io_types": { 00:04:07.205 "read": true, 00:04:07.205 "write": true, 00:04:07.205 "unmap": true, 00:04:07.205 "flush": true, 00:04:07.205 "reset": true, 00:04:07.205 "nvme_admin": false, 00:04:07.205 "nvme_io": false, 00:04:07.205 "nvme_io_md": false, 00:04:07.205 "write_zeroes": true, 00:04:07.205 "zcopy": true, 00:04:07.205 "get_zone_info": false, 00:04:07.205 "zone_management": false, 00:04:07.205 "zone_append": false, 00:04:07.205 "compare": false, 00:04:07.205 "compare_and_write": false, 00:04:07.205 "abort": true, 00:04:07.205 "seek_hole": false, 00:04:07.205 "seek_data": false, 00:04:07.205 "copy": true, 00:04:07.205 "nvme_iov_md": false 00:04:07.205 }, 00:04:07.205 "memory_domains": [ 00:04:07.205 { 00:04:07.205 "dma_device_id": "system", 00:04:07.205 "dma_device_type": 
1 00:04:07.205 }, 00:04:07.205 { 00:04:07.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.205 "dma_device_type": 2 00:04:07.205 } 00:04:07.205 ], 00:04:07.205 "driver_specific": {} 00:04:07.205 } 00:04:07.205 ]' 00:04:07.205 18:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:07.205 18:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:07.205 18:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:07.205 18:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.205 18:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.205 [2024-07-15 18:29:23.893515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:07.205 [2024-07-15 18:29:23.893545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:07.205 [2024-07-15 18:29:23.893559] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x130eac0 00:04:07.205 [2024-07-15 18:29:23.893565] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:07.205 [2024-07-15 18:29:23.894517] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:07.205 [2024-07-15 18:29:23.894537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:07.205 Passthru0 00:04:07.205 18:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.205 18:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:07.205 18:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.205 18:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.464 18:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.464 18:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 
00:04:07.464 { 00:04:07.464 "name": "Malloc2", 00:04:07.464 "aliases": [ 00:04:07.464 "92282e12-bac0-4e76-b5fd-4a42003e20e1" 00:04:07.464 ], 00:04:07.464 "product_name": "Malloc disk", 00:04:07.464 "block_size": 512, 00:04:07.464 "num_blocks": 16384, 00:04:07.464 "uuid": "92282e12-bac0-4e76-b5fd-4a42003e20e1", 00:04:07.464 "assigned_rate_limits": { 00:04:07.464 "rw_ios_per_sec": 0, 00:04:07.464 "rw_mbytes_per_sec": 0, 00:04:07.464 "r_mbytes_per_sec": 0, 00:04:07.464 "w_mbytes_per_sec": 0 00:04:07.464 }, 00:04:07.464 "claimed": true, 00:04:07.464 "claim_type": "exclusive_write", 00:04:07.464 "zoned": false, 00:04:07.464 "supported_io_types": { 00:04:07.464 "read": true, 00:04:07.464 "write": true, 00:04:07.464 "unmap": true, 00:04:07.464 "flush": true, 00:04:07.464 "reset": true, 00:04:07.464 "nvme_admin": false, 00:04:07.464 "nvme_io": false, 00:04:07.464 "nvme_io_md": false, 00:04:07.464 "write_zeroes": true, 00:04:07.464 "zcopy": true, 00:04:07.464 "get_zone_info": false, 00:04:07.464 "zone_management": false, 00:04:07.464 "zone_append": false, 00:04:07.464 "compare": false, 00:04:07.464 "compare_and_write": false, 00:04:07.464 "abort": true, 00:04:07.464 "seek_hole": false, 00:04:07.464 "seek_data": false, 00:04:07.464 "copy": true, 00:04:07.464 "nvme_iov_md": false 00:04:07.464 }, 00:04:07.464 "memory_domains": [ 00:04:07.464 { 00:04:07.464 "dma_device_id": "system", 00:04:07.464 "dma_device_type": 1 00:04:07.464 }, 00:04:07.464 { 00:04:07.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.464 "dma_device_type": 2 00:04:07.464 } 00:04:07.464 ], 00:04:07.464 "driver_specific": {} 00:04:07.464 }, 00:04:07.464 { 00:04:07.464 "name": "Passthru0", 00:04:07.464 "aliases": [ 00:04:07.464 "9b9d35cd-ba2c-5919-89a4-cbfe990350ea" 00:04:07.464 ], 00:04:07.464 "product_name": "passthru", 00:04:07.464 "block_size": 512, 00:04:07.464 "num_blocks": 16384, 00:04:07.464 "uuid": "9b9d35cd-ba2c-5919-89a4-cbfe990350ea", 00:04:07.464 "assigned_rate_limits": { 00:04:07.464 
"rw_ios_per_sec": 0, 00:04:07.464 "rw_mbytes_per_sec": 0, 00:04:07.464 "r_mbytes_per_sec": 0, 00:04:07.464 "w_mbytes_per_sec": 0 00:04:07.464 }, 00:04:07.464 "claimed": false, 00:04:07.464 "zoned": false, 00:04:07.464 "supported_io_types": { 00:04:07.464 "read": true, 00:04:07.464 "write": true, 00:04:07.465 "unmap": true, 00:04:07.465 "flush": true, 00:04:07.465 "reset": true, 00:04:07.465 "nvme_admin": false, 00:04:07.465 "nvme_io": false, 00:04:07.465 "nvme_io_md": false, 00:04:07.465 "write_zeroes": true, 00:04:07.465 "zcopy": true, 00:04:07.465 "get_zone_info": false, 00:04:07.465 "zone_management": false, 00:04:07.465 "zone_append": false, 00:04:07.465 "compare": false, 00:04:07.465 "compare_and_write": false, 00:04:07.465 "abort": true, 00:04:07.465 "seek_hole": false, 00:04:07.465 "seek_data": false, 00:04:07.465 "copy": true, 00:04:07.465 "nvme_iov_md": false 00:04:07.465 }, 00:04:07.465 "memory_domains": [ 00:04:07.465 { 00:04:07.465 "dma_device_id": "system", 00:04:07.465 "dma_device_type": 1 00:04:07.465 }, 00:04:07.465 { 00:04:07.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.465 "dma_device_type": 2 00:04:07.465 } 00:04:07.465 ], 00:04:07.465 "driver_specific": { 00:04:07.465 "passthru": { 00:04:07.465 "name": "Passthru0", 00:04:07.465 "base_bdev_name": "Malloc2" 00:04:07.465 } 00:04:07.465 } 00:04:07.465 } 00:04:07.465 ]' 00:04:07.465 18:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:07.465 18:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:07.465 18:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:07.465 18:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.465 18:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.465 18:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.465 18:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc2 00:04:07.465 18:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.465 18:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.465 18:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.465 18:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:07.465 18:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.465 18:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.465 18:29:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.465 18:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:07.465 18:29:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:07.465 18:29:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:07.465 00:04:07.465 real 0m0.254s 00:04:07.465 user 0m0.171s 00:04:07.465 sys 0m0.029s 00:04:07.465 18:29:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.465 18:29:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.465 ************************************ 00:04:07.465 END TEST rpc_daemon_integrity 00:04:07.465 ************************************ 00:04:07.465 18:29:24 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:07.465 18:29:24 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:07.465 18:29:24 rpc -- rpc/rpc.sh@84 -- # killprocess 907186 00:04:07.465 18:29:24 rpc -- common/autotest_common.sh@948 -- # '[' -z 907186 ']' 00:04:07.465 18:29:24 rpc -- common/autotest_common.sh@952 -- # kill -0 907186 00:04:07.465 18:29:24 rpc -- common/autotest_common.sh@953 -- # uname 00:04:07.465 18:29:24 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:07.465 18:29:24 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 
907186 00:04:07.465 18:29:24 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:07.465 18:29:24 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:07.465 18:29:24 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 907186' 00:04:07.465 killing process with pid 907186 00:04:07.465 18:29:24 rpc -- common/autotest_common.sh@967 -- # kill 907186 00:04:07.465 18:29:24 rpc -- common/autotest_common.sh@972 -- # wait 907186 00:04:07.724 00:04:07.724 real 0m2.384s 00:04:07.724 user 0m3.080s 00:04:07.724 sys 0m0.640s 00:04:07.724 18:29:24 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.724 18:29:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.724 ************************************ 00:04:07.724 END TEST rpc 00:04:07.724 ************************************ 00:04:07.994 18:29:24 -- common/autotest_common.sh@1142 -- # return 0 00:04:07.994 18:29:24 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:07.994 18:29:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.994 18:29:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.994 18:29:24 -- common/autotest_common.sh@10 -- # set +x 00:04:07.994 ************************************ 00:04:07.994 START TEST skip_rpc 00:04:07.994 ************************************ 00:04:07.994 18:29:24 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:07.994 * Looking for test storage... 
00:04:07.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:07.994 18:29:24 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:07.994 18:29:24 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:07.994 18:29:24 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:07.994 18:29:24 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.994 18:29:24 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.994 18:29:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.994 ************************************ 00:04:07.994 START TEST skip_rpc 00:04:07.994 ************************************ 00:04:07.994 18:29:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:07.994 18:29:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=907828 00:04:07.994 18:29:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:07.994 18:29:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:07.994 18:29:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:07.994 [2024-07-15 18:29:24.632935] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:04:07.994 [2024-07-15 18:29:24.632970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid907828 ] 00:04:07.994 EAL: No free 2048 kB hugepages reported on node 1 00:04:07.994 [2024-07-15 18:29:24.685527] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.362 [2024-07-15 18:29:24.758345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.633 18:29:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:13.633 18:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:13.633 18:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:13.633 18:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:13.633 18:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:13.633 18:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:13.633 18:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:13.633 18:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:13.633 18:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:13.633 18:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.633 18:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:13.633 18:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:13.633 18:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:13.633 18:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:13.633 18:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es 
== 0 )) 00:04:13.633 18:29:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:13.633 18:29:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 907828 00:04:13.633 18:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 907828 ']' 00:04:13.633 18:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 907828 00:04:13.633 18:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:13.633 18:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:13.633 18:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 907828 00:04:13.633 18:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:13.633 18:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:13.633 18:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 907828' 00:04:13.633 killing process with pid 907828 00:04:13.633 18:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 907828 00:04:13.633 18:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 907828 00:04:13.633 00:04:13.633 real 0m5.356s 00:04:13.633 user 0m5.137s 00:04:13.633 sys 0m0.243s 00:04:13.633 18:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.633 18:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.633 ************************************ 00:04:13.633 END TEST skip_rpc 00:04:13.633 ************************************ 00:04:13.633 18:29:29 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:13.633 18:29:29 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:13.633 18:29:29 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.633 18:29:29 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.633 
18:29:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.633 ************************************ 00:04:13.633 START TEST skip_rpc_with_json 00:04:13.633 ************************************ 00:04:13.633 18:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:13.633 18:29:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:13.633 18:29:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:13.633 18:29:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=908772 00:04:13.633 18:29:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.633 18:29:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 908772 00:04:13.633 18:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 908772 ']' 00:04:13.633 18:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.633 18:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:13.633 18:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.633 18:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:13.633 18:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:13.633 [2024-07-15 18:29:30.051659] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:04:13.633 [2024-07-15 18:29:30.051704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid908772 ] 00:04:13.633 EAL: No free 2048 kB hugepages reported on node 1 00:04:13.633 [2024-07-15 18:29:30.104261] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.633 [2024-07-15 18:29:30.183286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.201 18:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:14.201 18:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:14.201 18:29:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:14.201 18:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.201 18:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.201 [2024-07-15 18:29:30.872698] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:14.201 request: 00:04:14.201 { 00:04:14.201 "trtype": "tcp", 00:04:14.201 "method": "nvmf_get_transports", 00:04:14.201 "req_id": 1 00:04:14.201 } 00:04:14.201 Got JSON-RPC error response 00:04:14.201 response: 00:04:14.201 { 00:04:14.201 "code": -19, 00:04:14.201 "message": "No such device" 00:04:14.201 } 00:04:14.201 18:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:14.201 18:29:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:14.201 18:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.201 18:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.201 [2024-07-15 18:29:30.880785] tcp.c: 
672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:14.201 18:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:14.201 18:29:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:14.201 18:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.201 18:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.460 18:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:14.460 18:29:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:14.460 { 00:04:14.460 "subsystems": [ 00:04:14.460 { 00:04:14.460 "subsystem": "vfio_user_target", 00:04:14.460 "config": null 00:04:14.460 }, 00:04:14.460 { 00:04:14.460 "subsystem": "keyring", 00:04:14.460 "config": [] 00:04:14.460 }, 00:04:14.460 { 00:04:14.460 "subsystem": "iobuf", 00:04:14.460 "config": [ 00:04:14.460 { 00:04:14.460 "method": "iobuf_set_options", 00:04:14.460 "params": { 00:04:14.460 "small_pool_count": 8192, 00:04:14.460 "large_pool_count": 1024, 00:04:14.460 "small_bufsize": 8192, 00:04:14.460 "large_bufsize": 135168 00:04:14.460 } 00:04:14.460 } 00:04:14.460 ] 00:04:14.460 }, 00:04:14.460 { 00:04:14.460 "subsystem": "sock", 00:04:14.460 "config": [ 00:04:14.460 { 00:04:14.460 "method": "sock_set_default_impl", 00:04:14.460 "params": { 00:04:14.460 "impl_name": "posix" 00:04:14.460 } 00:04:14.460 }, 00:04:14.460 { 00:04:14.460 "method": "sock_impl_set_options", 00:04:14.460 "params": { 00:04:14.460 "impl_name": "ssl", 00:04:14.460 "recv_buf_size": 4096, 00:04:14.460 "send_buf_size": 4096, 00:04:14.460 "enable_recv_pipe": true, 00:04:14.460 "enable_quickack": false, 00:04:14.460 "enable_placement_id": 0, 00:04:14.460 "enable_zerocopy_send_server": true, 00:04:14.460 "enable_zerocopy_send_client": false, 00:04:14.460 "zerocopy_threshold": 0, 
00:04:14.460 "tls_version": 0, 00:04:14.460 "enable_ktls": false 00:04:14.460 } 00:04:14.460 }, 00:04:14.460 { 00:04:14.460 "method": "sock_impl_set_options", 00:04:14.460 "params": { 00:04:14.460 "impl_name": "posix", 00:04:14.460 "recv_buf_size": 2097152, 00:04:14.460 "send_buf_size": 2097152, 00:04:14.460 "enable_recv_pipe": true, 00:04:14.460 "enable_quickack": false, 00:04:14.460 "enable_placement_id": 0, 00:04:14.460 "enable_zerocopy_send_server": true, 00:04:14.460 "enable_zerocopy_send_client": false, 00:04:14.460 "zerocopy_threshold": 0, 00:04:14.460 "tls_version": 0, 00:04:14.460 "enable_ktls": false 00:04:14.460 } 00:04:14.460 } 00:04:14.460 ] 00:04:14.460 }, 00:04:14.460 { 00:04:14.460 "subsystem": "vmd", 00:04:14.460 "config": [] 00:04:14.460 }, 00:04:14.460 { 00:04:14.460 "subsystem": "accel", 00:04:14.460 "config": [ 00:04:14.460 { 00:04:14.460 "method": "accel_set_options", 00:04:14.460 "params": { 00:04:14.460 "small_cache_size": 128, 00:04:14.460 "large_cache_size": 16, 00:04:14.460 "task_count": 2048, 00:04:14.460 "sequence_count": 2048, 00:04:14.460 "buf_count": 2048 00:04:14.460 } 00:04:14.460 } 00:04:14.460 ] 00:04:14.460 }, 00:04:14.460 { 00:04:14.460 "subsystem": "bdev", 00:04:14.460 "config": [ 00:04:14.460 { 00:04:14.460 "method": "bdev_set_options", 00:04:14.460 "params": { 00:04:14.460 "bdev_io_pool_size": 65535, 00:04:14.460 "bdev_io_cache_size": 256, 00:04:14.460 "bdev_auto_examine": true, 00:04:14.460 "iobuf_small_cache_size": 128, 00:04:14.460 "iobuf_large_cache_size": 16 00:04:14.460 } 00:04:14.460 }, 00:04:14.460 { 00:04:14.460 "method": "bdev_raid_set_options", 00:04:14.460 "params": { 00:04:14.460 "process_window_size_kb": 1024 00:04:14.460 } 00:04:14.460 }, 00:04:14.460 { 00:04:14.460 "method": "bdev_iscsi_set_options", 00:04:14.460 "params": { 00:04:14.460 "timeout_sec": 30 00:04:14.460 } 00:04:14.460 }, 00:04:14.460 { 00:04:14.460 "method": "bdev_nvme_set_options", 00:04:14.460 "params": { 00:04:14.460 "action_on_timeout": 
"none", 00:04:14.460 "timeout_us": 0, 00:04:14.460 "timeout_admin_us": 0, 00:04:14.460 "keep_alive_timeout_ms": 10000, 00:04:14.460 "arbitration_burst": 0, 00:04:14.460 "low_priority_weight": 0, 00:04:14.460 "medium_priority_weight": 0, 00:04:14.460 "high_priority_weight": 0, 00:04:14.460 "nvme_adminq_poll_period_us": 10000, 00:04:14.461 "nvme_ioq_poll_period_us": 0, 00:04:14.461 "io_queue_requests": 0, 00:04:14.461 "delay_cmd_submit": true, 00:04:14.461 "transport_retry_count": 4, 00:04:14.461 "bdev_retry_count": 3, 00:04:14.461 "transport_ack_timeout": 0, 00:04:14.461 "ctrlr_loss_timeout_sec": 0, 00:04:14.461 "reconnect_delay_sec": 0, 00:04:14.461 "fast_io_fail_timeout_sec": 0, 00:04:14.461 "disable_auto_failback": false, 00:04:14.461 "generate_uuids": false, 00:04:14.461 "transport_tos": 0, 00:04:14.461 "nvme_error_stat": false, 00:04:14.461 "rdma_srq_size": 0, 00:04:14.461 "io_path_stat": false, 00:04:14.461 "allow_accel_sequence": false, 00:04:14.461 "rdma_max_cq_size": 0, 00:04:14.461 "rdma_cm_event_timeout_ms": 0, 00:04:14.461 "dhchap_digests": [ 00:04:14.461 "sha256", 00:04:14.461 "sha384", 00:04:14.461 "sha512" 00:04:14.461 ], 00:04:14.461 "dhchap_dhgroups": [ 00:04:14.461 "null", 00:04:14.461 "ffdhe2048", 00:04:14.461 "ffdhe3072", 00:04:14.461 "ffdhe4096", 00:04:14.461 "ffdhe6144", 00:04:14.461 "ffdhe8192" 00:04:14.461 ] 00:04:14.461 } 00:04:14.461 }, 00:04:14.461 { 00:04:14.461 "method": "bdev_nvme_set_hotplug", 00:04:14.461 "params": { 00:04:14.461 "period_us": 100000, 00:04:14.461 "enable": false 00:04:14.461 } 00:04:14.461 }, 00:04:14.461 { 00:04:14.461 "method": "bdev_wait_for_examine" 00:04:14.461 } 00:04:14.461 ] 00:04:14.461 }, 00:04:14.461 { 00:04:14.461 "subsystem": "scsi", 00:04:14.461 "config": null 00:04:14.461 }, 00:04:14.461 { 00:04:14.461 "subsystem": "scheduler", 00:04:14.461 "config": [ 00:04:14.461 { 00:04:14.461 "method": "framework_set_scheduler", 00:04:14.461 "params": { 00:04:14.461 "name": "static" 00:04:14.461 } 00:04:14.461 } 
00:04:14.461 ] 00:04:14.461 }, 00:04:14.461 { 00:04:14.461 "subsystem": "vhost_scsi", 00:04:14.461 "config": [] 00:04:14.461 }, 00:04:14.461 { 00:04:14.461 "subsystem": "vhost_blk", 00:04:14.461 "config": [] 00:04:14.461 }, 00:04:14.461 { 00:04:14.461 "subsystem": "ublk", 00:04:14.461 "config": [] 00:04:14.461 }, 00:04:14.461 { 00:04:14.461 "subsystem": "nbd", 00:04:14.461 "config": [] 00:04:14.461 }, 00:04:14.461 { 00:04:14.461 "subsystem": "nvmf", 00:04:14.461 "config": [ 00:04:14.461 { 00:04:14.461 "method": "nvmf_set_config", 00:04:14.461 "params": { 00:04:14.461 "discovery_filter": "match_any", 00:04:14.461 "admin_cmd_passthru": { 00:04:14.461 "identify_ctrlr": false 00:04:14.461 } 00:04:14.461 } 00:04:14.461 }, 00:04:14.461 { 00:04:14.461 "method": "nvmf_set_max_subsystems", 00:04:14.461 "params": { 00:04:14.461 "max_subsystems": 1024 00:04:14.461 } 00:04:14.461 }, 00:04:14.461 { 00:04:14.461 "method": "nvmf_set_crdt", 00:04:14.461 "params": { 00:04:14.461 "crdt1": 0, 00:04:14.461 "crdt2": 0, 00:04:14.461 "crdt3": 0 00:04:14.461 } 00:04:14.461 }, 00:04:14.461 { 00:04:14.461 "method": "nvmf_create_transport", 00:04:14.461 "params": { 00:04:14.461 "trtype": "TCP", 00:04:14.461 "max_queue_depth": 128, 00:04:14.461 "max_io_qpairs_per_ctrlr": 127, 00:04:14.461 "in_capsule_data_size": 4096, 00:04:14.461 "max_io_size": 131072, 00:04:14.461 "io_unit_size": 131072, 00:04:14.461 "max_aq_depth": 128, 00:04:14.461 "num_shared_buffers": 511, 00:04:14.461 "buf_cache_size": 4294967295, 00:04:14.461 "dif_insert_or_strip": false, 00:04:14.461 "zcopy": false, 00:04:14.461 "c2h_success": true, 00:04:14.461 "sock_priority": 0, 00:04:14.461 "abort_timeout_sec": 1, 00:04:14.461 "ack_timeout": 0, 00:04:14.461 "data_wr_pool_size": 0 00:04:14.461 } 00:04:14.461 } 00:04:14.461 ] 00:04:14.461 }, 00:04:14.461 { 00:04:14.461 "subsystem": "iscsi", 00:04:14.461 "config": [ 00:04:14.461 { 00:04:14.461 "method": "iscsi_set_options", 00:04:14.461 "params": { 00:04:14.461 "node_base": 
"iqn.2016-06.io.spdk", 00:04:14.461 "max_sessions": 128, 00:04:14.461 "max_connections_per_session": 2, 00:04:14.461 "max_queue_depth": 64, 00:04:14.461 "default_time2wait": 2, 00:04:14.461 "default_time2retain": 20, 00:04:14.461 "first_burst_length": 8192, 00:04:14.461 "immediate_data": true, 00:04:14.461 "allow_duplicated_isid": false, 00:04:14.461 "error_recovery_level": 0, 00:04:14.461 "nop_timeout": 60, 00:04:14.461 "nop_in_interval": 30, 00:04:14.461 "disable_chap": false, 00:04:14.461 "require_chap": false, 00:04:14.461 "mutual_chap": false, 00:04:14.461 "chap_group": 0, 00:04:14.461 "max_large_datain_per_connection": 64, 00:04:14.461 "max_r2t_per_connection": 4, 00:04:14.461 "pdu_pool_size": 36864, 00:04:14.461 "immediate_data_pool_size": 16384, 00:04:14.461 "data_out_pool_size": 2048 00:04:14.461 } 00:04:14.461 } 00:04:14.461 ] 00:04:14.461 } 00:04:14.461 ] 00:04:14.461 } 00:04:14.461 18:29:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:14.461 18:29:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 908772 00:04:14.461 18:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 908772 ']' 00:04:14.461 18:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 908772 00:04:14.461 18:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:14.461 18:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:14.461 18:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 908772 00:04:14.461 18:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:14.461 18:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:14.461 18:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 908772' 00:04:14.461 killing 
process with pid 908772 00:04:14.461 18:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 908772 00:04:14.461 18:29:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 908772 00:04:14.720 18:29:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=909008 00:04:14.720 18:29:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:14.720 18:29:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:19.992 18:29:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 909008 00:04:19.992 18:29:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 909008 ']' 00:04:19.992 18:29:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 909008 00:04:19.992 18:29:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:19.992 18:29:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:19.992 18:29:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 909008 00:04:19.992 18:29:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:19.992 18:29:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:19.992 18:29:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 909008' 00:04:19.992 killing process with pid 909008 00:04:19.992 18:29:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 909008 00:04:19.992 18:29:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 909008 00:04:20.252 18:29:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:20.252 18:29:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:20.252 00:04:20.252 real 0m6.715s 00:04:20.252 user 0m6.551s 00:04:20.252 sys 0m0.559s 00:04:20.252 18:29:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.252 18:29:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:20.252 ************************************ 00:04:20.252 END TEST skip_rpc_with_json 00:04:20.252 ************************************ 00:04:20.252 18:29:36 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:20.252 18:29:36 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:20.252 18:29:36 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.252 18:29:36 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.252 18:29:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.252 ************************************ 00:04:20.252 START TEST skip_rpc_with_delay 00:04:20.252 ************************************ 00:04:20.252 18:29:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:20.252 18:29:36 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:20.252 18:29:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:20.252 18:29:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:20.252 18:29:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.252 18:29:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:20.252 18:29:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.252 18:29:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:20.252 18:29:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.252 18:29:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:20.252 18:29:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.252 18:29:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:20.252 18:29:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:20.252 [2024-07-15 18:29:36.835757] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:20.252 [2024-07-15 18:29:36.835813] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:20.252 18:29:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:20.252 18:29:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:20.252 18:29:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:20.252 18:29:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:20.252 00:04:20.252 real 0m0.059s 00:04:20.252 user 0m0.035s 00:04:20.252 sys 0m0.022s 00:04:20.252 18:29:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.252 18:29:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:20.252 ************************************ 00:04:20.252 END TEST skip_rpc_with_delay 00:04:20.252 ************************************ 00:04:20.252 18:29:36 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:20.252 18:29:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:20.252 18:29:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:20.252 18:29:36 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:20.252 18:29:36 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.252 18:29:36 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.252 18:29:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.252 ************************************ 00:04:20.252 START TEST exit_on_failed_rpc_init 00:04:20.252 ************************************ 00:04:20.252 18:29:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:20.252 18:29:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=909985 00:04:20.252 18:29:36 skip_rpc.exit_on_failed_rpc_init -- 
rpc/skip_rpc.sh@63 -- # waitforlisten 909985 00:04:20.252 18:29:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:20.252 18:29:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 909985 ']' 00:04:20.252 18:29:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.252 18:29:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:20.252 18:29:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.252 18:29:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:20.252 18:29:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:20.511 [2024-07-15 18:29:36.970680] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:04:20.512 [2024-07-15 18:29:36.970725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid909985 ] 00:04:20.512 EAL: No free 2048 kB hugepages reported on node 1 00:04:20.512 [2024-07-15 18:29:37.023374] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.512 [2024-07-15 18:29:37.096189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.080 18:29:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:21.080 18:29:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:21.080 18:29:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.080 18:29:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:21.080 18:29:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:21.080 18:29:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:21.080 18:29:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.080 18:29:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:21.080 18:29:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.080 18:29:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:21.080 18:29:37 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.080 18:29:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:21.080 18:29:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.080 18:29:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:21.080 18:29:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:21.340 [2024-07-15 18:29:37.811858] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:04:21.340 [2024-07-15 18:29:37.811903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid910218 ] 00:04:21.340 EAL: No free 2048 kB hugepages reported on node 1 00:04:21.340 [2024-07-15 18:29:37.864459] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.340 [2024-07-15 18:29:37.938315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:21.340 [2024-07-15 18:29:37.938401] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:21.340 [2024-07-15 18:29:37.938411] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:21.340 [2024-07-15 18:29:37.938417] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:21.340 18:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:21.340 18:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:21.340 18:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:21.340 18:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:21.340 18:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:21.340 18:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:21.340 18:29:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:21.340 18:29:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 909985 00:04:21.340 18:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 909985 ']' 00:04:21.340 18:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 909985 00:04:21.340 18:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:21.340 18:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:21.340 18:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 909985 00:04:21.599 18:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:21.599 18:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:21.599 18:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 909985' 
00:04:21.599 killing process with pid 909985 00:04:21.599 18:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 909985 00:04:21.599 18:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 909985 00:04:21.859 00:04:21.859 real 0m1.451s 00:04:21.859 user 0m1.683s 00:04:21.859 sys 0m0.382s 00:04:21.859 18:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.859 18:29:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:21.859 ************************************ 00:04:21.859 END TEST exit_on_failed_rpc_init 00:04:21.859 ************************************ 00:04:21.859 18:29:38 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:21.859 18:29:38 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:21.859 00:04:21.859 real 0m13.928s 00:04:21.859 user 0m13.538s 00:04:21.859 sys 0m1.444s 00:04:21.859 18:29:38 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.859 18:29:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.859 ************************************ 00:04:21.859 END TEST skip_rpc 00:04:21.859 ************************************ 00:04:21.859 18:29:38 -- common/autotest_common.sh@1142 -- # return 0 00:04:21.859 18:29:38 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:21.859 18:29:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.859 18:29:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.859 18:29:38 -- common/autotest_common.sh@10 -- # set +x 00:04:21.859 ************************************ 00:04:21.859 START TEST rpc_client 00:04:21.859 ************************************ 00:04:21.859 18:29:38 rpc_client -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:21.859 * Looking for test storage... 00:04:21.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:21.859 18:29:38 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:22.119 OK 00:04:22.119 18:29:38 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:22.119 00:04:22.119 real 0m0.107s 00:04:22.119 user 0m0.047s 00:04:22.119 sys 0m0.068s 00:04:22.119 18:29:38 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.119 18:29:38 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:22.119 ************************************ 00:04:22.119 END TEST rpc_client 00:04:22.119 ************************************ 00:04:22.119 18:29:38 -- common/autotest_common.sh@1142 -- # return 0 00:04:22.119 18:29:38 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:22.119 18:29:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.119 18:29:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.119 18:29:38 -- common/autotest_common.sh@10 -- # set +x 00:04:22.119 ************************************ 00:04:22.119 START TEST json_config 00:04:22.119 ************************************ 00:04:22.119 18:29:38 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:22.119 18:29:38 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:22.119 18:29:38 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:22.119 18:29:38 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:22.119 18:29:38 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:22.119 
18:29:38 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:22.119 18:29:38 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:22.119 18:29:38 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:22.119 18:29:38 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:22.119 18:29:38 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:22.119 18:29:38 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:22.119 18:29:38 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:22.119 18:29:38 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:22.119 18:29:38 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:22.119 18:29:38 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:22.119 18:29:38 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:22.119 18:29:38 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:22.119 18:29:38 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:22.119 18:29:38 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:22.119 18:29:38 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:22.119 18:29:38 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:22.119 18:29:38 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:22.119 18:29:38 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:22.119 18:29:38 json_config -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.119 18:29:38 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.119 18:29:38 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.119 18:29:38 json_config -- paths/export.sh@5 -- # export PATH 00:04:22.119 18:29:38 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.119 18:29:38 json_config -- nvmf/common.sh@47 -- # : 0 00:04:22.119 18:29:38 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:22.119 
18:29:38 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:22.119 18:29:38 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:22.119 18:29:38 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:22.119 18:29:38 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:22.119 18:29:38 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:22.119 18:29:38 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:22.119 18:29:38 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:22.119 18:29:38 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:22.119 18:29:38 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:22.119 18:29:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:22.119 18:29:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:22.119 18:29:38 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:22.119 18:29:38 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:22.120 18:29:38 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:22.120 18:29:38 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:22.120 18:29:38 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:22.120 18:29:38 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:22.120 18:29:38 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:22.120 18:29:38 json_config -- json_config/json_config.sh@34 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:22.120 18:29:38 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:22.120 18:29:38 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:22.120 18:29:38 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:22.120 18:29:38 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:22.120 INFO: JSON configuration test init 00:04:22.120 18:29:38 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:22.120 18:29:38 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:22.120 18:29:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:22.120 18:29:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.120 18:29:38 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:22.120 18:29:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:22.120 18:29:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.120 18:29:38 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:22.120 18:29:38 json_config -- json_config/common.sh@9 -- # local app=target 00:04:22.120 18:29:38 json_config -- json_config/common.sh@10 -- # shift 00:04:22.120 18:29:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:22.120 18:29:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:22.120 18:29:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:22.120 18:29:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:22.120 18:29:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 
00:04:22.120 18:29:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=910390 00:04:22.120 18:29:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:22.120 Waiting for target to run... 00:04:22.120 18:29:38 json_config -- json_config/common.sh@25 -- # waitforlisten 910390 /var/tmp/spdk_tgt.sock 00:04:22.120 18:29:38 json_config -- common/autotest_common.sh@829 -- # '[' -z 910390 ']' 00:04:22.120 18:29:38 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:22.120 18:29:38 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:22.120 18:29:38 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:22.120 18:29:38 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:22.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:22.120 18:29:38 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:22.120 18:29:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.120 [2024-07-15 18:29:38.789180] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
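The launch-and-wait pattern above — `spdk_tgt` started with `--wait-for-rpc`, then `waitforlisten` polling for the UNIX socket with `max_retries=100` — can be sketched generically. This is a stand-in, not the harness code: the socket is simulated with a plain file created by a delayed background job, and the path and delays are illustrative.

```shell
#!/usr/bin/env bash
# Stand-in for waitforlisten: poll until the target's RPC socket path
# exists, bounded by max_retries. A delayed `touch` simulates spdk_tgt
# creating /var/tmp/spdk_tgt.sock; here it is just a temp file path.
sock=$(mktemp -u)                     # a path that does not exist yet
(sleep 0.3; touch "$sock") &          # simulated target startup
max_retries=100
i=0
until [ -e "$sock" ]; do
  i=$((i + 1))
  if [ "$i" -ge "$max_retries" ]; then
    echo "target did not start" >&2
    break
  fi
  sleep 0.1
done
if [ -e "$sock" ]; then
  echo "socket up after $i polls"
fi
rm -f "$sock"
wait
```

The real helper additionally probes the socket with an RPC call rather than checking mere existence; the bounded-retry shape is the same.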
00:04:22.120 [2024-07-15 18:29:38.789244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid910390 ] 00:04:22.120 EAL: No free 2048 kB hugepages reported on node 1 00:04:22.409 [2024-07-15 18:29:39.070453] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.669 [2024-07-15 18:29:39.138032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.928 18:29:39 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:22.928 18:29:39 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:22.928 18:29:39 json_config -- json_config/common.sh@26 -- # echo '' 00:04:22.928 00:04:22.928 18:29:39 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:22.928 18:29:39 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:22.928 18:29:39 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:22.928 18:29:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.929 18:29:39 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:22.929 18:29:39 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:22.929 18:29:39 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:22.929 18:29:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.929 18:29:39 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:22.929 18:29:39 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:22.929 18:29:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:26.222 
18:29:42 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:26.222 18:29:42 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:26.222 18:29:42 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:26.222 18:29:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.222 18:29:42 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:26.222 18:29:42 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:26.222 18:29:42 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:26.222 18:29:42 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:26.222 18:29:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:26.222 18:29:42 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:26.222 18:29:42 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:26.222 18:29:42 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:26.222 18:29:42 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:26.222 18:29:42 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:26.222 18:29:42 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:26.222 18:29:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.222 18:29:42 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:26.222 18:29:42 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:26.222 18:29:42 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:26.222 18:29:42 json_config -- json_config/json_config.sh@286 
-- # [[ 0 -eq 1 ]] 00:04:26.222 18:29:42 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:26.222 18:29:42 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:26.222 18:29:42 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:26.222 18:29:42 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:26.222 18:29:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.222 18:29:42 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:26.222 18:29:42 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:26.222 18:29:42 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:26.222 18:29:42 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:26.222 18:29:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:26.482 MallocForNvmf0 00:04:26.482 18:29:43 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:26.482 18:29:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:26.740 MallocForNvmf1 00:04:26.740 18:29:43 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:26.740 18:29:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:26.740 [2024-07-15 18:29:43.412751] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:26.740 18:29:43 json_config -- 
json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:26.740 18:29:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:26.998 18:29:43 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:26.998 18:29:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:27.257 18:29:43 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:27.257 18:29:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:27.257 18:29:43 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:27.257 18:29:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:27.516 [2024-07-15 18:29:44.078847] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:27.516 18:29:44 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:27.516 18:29:44 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:27.516 18:29:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.516 18:29:44 json_config -- json_config/json_config.sh@293 -- # timing_exit 
json_config_setup_target 00:04:27.516 18:29:44 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:27.516 18:29:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.516 18:29:44 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:27.516 18:29:44 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:27.516 18:29:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:27.775 MallocBdevForConfigChangeCheck 00:04:27.775 18:29:44 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:27.775 18:29:44 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:27.775 18:29:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.775 18:29:44 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:27.775 18:29:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:28.034 18:29:44 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:28.034 INFO: shutting down applications... 
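For reference, the `spdk_tgt_config.json` written by the `save_config` RPC above is a JSON document of "subsystems", each listing the method/params pairs needed to replay the configuration at relaunch (`--json`). The fragment below is an illustrative sketch of the nvmf portion only, not a capture from this run; the field names follow SPDK's usual `save_config` layout but should be treated as assumptions, with parameter values mirroring the RPCs issued earlier in the log.

```
{
  "subsystems": [
    {
      "subsystem": "nvmf",
      "config": [
        { "method": "nvmf_create_transport",
          "params": { "trtype": "TCP", "io_unit_size": 8192 } },
        { "method": "nvmf_create_subsystem",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "allow_any_host": true,
                      "serial_number": "SPDK00000000000001" } },
        { "method": "nvmf_subsystem_add_listener",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "listen_address": { "trtype": "TCP",
                                          "traddr": "127.0.0.1",
                                          "trsvcid": "4420" } } }
      ]
    }
  ]
}
```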
00:04:28.034 18:29:44 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:28.034 18:29:44 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:28.034 18:29:44 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:28.034 18:29:44 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:29.942 Calling clear_iscsi_subsystem 00:04:29.942 Calling clear_nvmf_subsystem 00:04:29.942 Calling clear_nbd_subsystem 00:04:29.942 Calling clear_ublk_subsystem 00:04:29.942 Calling clear_vhost_blk_subsystem 00:04:29.942 Calling clear_vhost_scsi_subsystem 00:04:29.942 Calling clear_bdev_subsystem 00:04:29.942 18:29:46 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:29.942 18:29:46 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:29.942 18:29:46 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:29.942 18:29:46 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:29.942 18:29:46 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:29.942 18:29:46 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:29.942 18:29:46 json_config -- json_config/json_config.sh@345 -- # break 00:04:29.942 18:29:46 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:29.942 18:29:46 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:29.942 18:29:46 json_config -- 
json_config/common.sh@31 -- # local app=target 00:04:29.942 18:29:46 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:29.942 18:29:46 json_config -- json_config/common.sh@35 -- # [[ -n 910390 ]] 00:04:29.942 18:29:46 json_config -- json_config/common.sh@38 -- # kill -SIGINT 910390 00:04:29.942 18:29:46 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:29.942 18:29:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:29.942 18:29:46 json_config -- json_config/common.sh@41 -- # kill -0 910390 00:04:29.942 18:29:46 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:30.573 18:29:47 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:30.573 18:29:47 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:30.573 18:29:47 json_config -- json_config/common.sh@41 -- # kill -0 910390 00:04:30.573 18:29:47 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:30.573 18:29:47 json_config -- json_config/common.sh@43 -- # break 00:04:30.573 18:29:47 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:30.573 18:29:47 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:30.573 SPDK target shutdown done 00:04:30.573 18:29:47 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:30.573 INFO: relaunching applications... 
00:04:30.573 18:29:47 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:30.573 18:29:47 json_config -- json_config/common.sh@9 -- # local app=target 00:04:30.573 18:29:47 json_config -- json_config/common.sh@10 -- # shift 00:04:30.573 18:29:47 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:30.573 18:29:47 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:30.573 18:29:47 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:30.573 18:29:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.573 18:29:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.573 18:29:47 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=911958 00:04:30.573 18:29:47 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:30.573 Waiting for target to run... 00:04:30.573 18:29:47 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:30.573 18:29:47 json_config -- json_config/common.sh@25 -- # waitforlisten 911958 /var/tmp/spdk_tgt.sock 00:04:30.573 18:29:47 json_config -- common/autotest_common.sh@829 -- # '[' -z 911958 ']' 00:04:30.573 18:29:47 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:30.573 18:29:47 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:30.573 18:29:47 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:30.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:30.573 18:29:47 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:30.573 18:29:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.573 [2024-07-15 18:29:47.124359] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:04:30.573 [2024-07-15 18:29:47.124409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid911958 ] 00:04:30.573 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.142 [2024-07-15 18:29:47.553069] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.142 [2024-07-15 18:29:47.643573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.446 [2024-07-15 18:29:50.655804] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:34.446 [2024-07-15 18:29:50.688099] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:34.705 18:29:51 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:34.705 18:29:51 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:34.705 18:29:51 json_config -- json_config/common.sh@26 -- # echo '' 00:04:34.705 00:04:34.705 18:29:51 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:34.705 18:29:51 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:34.705 INFO: Checking if target configuration is the same... 
00:04:34.705 18:29:51 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.705 18:29:51 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:34.705 18:29:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:34.705 + '[' 2 -ne 2 ']' 00:04:34.705 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:34.705 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:34.705 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:34.705 +++ basename /dev/fd/62 00:04:34.705 ++ mktemp /tmp/62.XXX 00:04:34.705 + tmp_file_1=/tmp/62.HJH 00:04:34.705 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.705 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:34.705 + tmp_file_2=/tmp/spdk_tgt_config.json.uYj 00:04:34.705 + ret=0 00:04:34.705 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:34.965 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:34.965 + diff -u /tmp/62.HJH /tmp/spdk_tgt_config.json.uYj 00:04:34.965 + echo 'INFO: JSON config files are the same' 00:04:34.965 INFO: JSON config files are the same 00:04:34.965 + rm /tmp/62.HJH /tmp/spdk_tgt_config.json.uYj 00:04:34.965 + exit 0 00:04:34.965 18:29:51 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:34.965 18:29:51 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:34.965 INFO: changing configuration and checking if this can be detected... 
00:04:34.965 18:29:51 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:34.965 18:29:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:35.245 18:29:51 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:35.245 18:29:51 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:35.245 18:29:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:35.245 + '[' 2 -ne 2 ']' 00:04:35.245 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:35.245 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:35.245 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:35.245 +++ basename /dev/fd/62 00:04:35.245 ++ mktemp /tmp/62.XXX 00:04:35.245 + tmp_file_1=/tmp/62.WEV 00:04:35.245 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:35.245 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:35.245 + tmp_file_2=/tmp/spdk_tgt_config.json.4jG 00:04:35.245 + ret=0 00:04:35.245 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:35.504 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:35.504 + diff -u /tmp/62.WEV /tmp/spdk_tgt_config.json.4jG 00:04:35.504 + ret=1 00:04:35.504 + echo '=== Start of file: /tmp/62.WEV ===' 00:04:35.504 + cat /tmp/62.WEV 00:04:35.504 + echo '=== End of file: /tmp/62.WEV ===' 00:04:35.504 + echo '' 00:04:35.504 + echo '=== Start of file: /tmp/spdk_tgt_config.json.4jG ===' 00:04:35.504 + cat /tmp/spdk_tgt_config.json.4jG 00:04:35.504 + echo '=== End of file: /tmp/spdk_tgt_config.json.4jG ===' 00:04:35.504 + echo '' 00:04:35.504 + rm /tmp/62.WEV /tmp/spdk_tgt_config.json.4jG 00:04:35.504 + exit 1 00:04:35.504 18:29:52 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:35.504 INFO: configuration change detected. 
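The `json_diff.sh` flow traced above boils down to: dump both configs to temp files, normalize ordering, `diff -u`, and treat a nonzero diff status as "change detected". A minimal stand-alone sketch, with two tiny stand-in files instead of RPC output and `sort` standing in for `config_filter.py -method sort`:

```shell
#!/usr/bin/env bash
# Normalize-then-diff change detection, as in test/json_config/json_diff.sh.
# Throwaway files stand in for the live config and the saved
# spdk_tgt_config.json; contents are illustrative only.
tmp_file_1=$(mktemp)
tmp_file_2=$(mktemp)
printf 'MallocBdevForConfigChangeCheck\nMallocForNvmf0\n' > "$tmp_file_1"
printf 'MallocForNvmf0\n' > "$tmp_file_2"   # bdev deleted between dumps
sort -o "$tmp_file_1" "$tmp_file_1"         # normalize ordering
sort -o "$tmp_file_2" "$tmp_file_2"
ret=0
diff -u "$tmp_file_1" "$tmp_file_2" || ret=1
if [ "$ret" -eq 1 ]; then
  echo 'INFO: configuration change detected.'
fi
rm -f "$tmp_file_1" "$tmp_file_2"
```

Identical files leave `ret=0` (the "JSON config files are the same" branch earlier in the log); deleting `MallocBdevForConfigChangeCheck` first is exactly how the test forces the `ret=1` branch.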
00:04:35.504 18:29:52 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:35.504 18:29:52 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:35.504 18:29:52 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:35.504 18:29:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.504 18:29:52 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:35.504 18:29:52 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:35.504 18:29:52 json_config -- json_config/json_config.sh@317 -- # [[ -n 911958 ]] 00:04:35.504 18:29:52 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:35.504 18:29:52 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:35.504 18:29:52 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:35.504 18:29:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.504 18:29:52 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:35.504 18:29:52 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:35.504 18:29:52 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:35.504 18:29:52 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:35.504 18:29:52 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:35.504 18:29:52 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:35.504 18:29:52 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:35.504 18:29:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.763 18:29:52 json_config -- json_config/json_config.sh@323 -- # killprocess 911958 00:04:35.763 18:29:52 json_config -- common/autotest_common.sh@948 -- # '[' -z 911958 ']' 00:04:35.763 18:29:52 json_config -- common/autotest_common.sh@952 -- # kill -0 911958 
00:04:35.763 18:29:52 json_config -- common/autotest_common.sh@953 -- # uname 00:04:35.763 18:29:52 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:35.763 18:29:52 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 911958 00:04:35.763 18:29:52 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:35.763 18:29:52 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:35.763 18:29:52 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 911958' 00:04:35.763 killing process with pid 911958 00:04:35.763 18:29:52 json_config -- common/autotest_common.sh@967 -- # kill 911958 00:04:35.763 18:29:52 json_config -- common/autotest_common.sh@972 -- # wait 911958 00:04:37.187 18:29:53 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:37.187 18:29:53 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:37.187 18:29:53 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:37.187 18:29:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.187 18:29:53 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:37.187 18:29:53 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:37.187 INFO: Success 00:04:37.187 00:04:37.187 real 0m15.138s 00:04:37.187 user 0m15.933s 00:04:37.187 sys 0m1.819s 00:04:37.187 18:29:53 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.187 18:29:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.187 ************************************ 00:04:37.187 END TEST json_config 00:04:37.187 ************************************ 00:04:37.187 18:29:53 -- common/autotest_common.sh@1142 -- # return 0 00:04:37.187 18:29:53 -- 
spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:37.187 18:29:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.187 18:29:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.187 18:29:53 -- common/autotest_common.sh@10 -- # set +x 00:04:37.187 ************************************ 00:04:37.187 START TEST json_config_extra_key 00:04:37.187 ************************************ 00:04:37.187 18:29:53 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:37.187 18:29:53 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:37.187 18:29:53 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:37.447 18:29:53 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:37.447 18:29:53 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:37.447 18:29:53 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:37.447 18:29:53 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:37.447 18:29:53 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:37.447 18:29:53 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:37.447 18:29:53 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:37.447 18:29:53 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:37.447 18:29:53 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:37.447 18:29:53 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:37.447 18:29:53 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:37.447 18:29:53 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:37.447 18:29:53 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:37.447 18:29:53 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:37.447 18:29:53 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:37.447 18:29:53 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:37.447 18:29:53 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:37.447 18:29:53 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:37.447 18:29:53 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:37.447 18:29:53 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:37.447 18:29:53 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.447 18:29:53 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.447 18:29:53 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.447 18:29:53 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:37.447 18:29:53 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.447 18:29:53 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:37.447 18:29:53 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:37.447 18:29:53 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:37.447 18:29:53 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:37.447 18:29:53 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:37.447 18:29:53 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:37.447 18:29:53 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:37.447 18:29:53 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:37.447 18:29:53 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:37.447 18:29:53 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:37.447 18:29:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:37.447 18:29:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:37.447 18:29:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:37.447 18:29:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:37.447 18:29:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:37.447 18:29:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:37.448 18:29:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:37.448 18:29:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:37.448 18:29:53 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:37.448 18:29:53 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:37.448 INFO: launching applications... 
00:04:37.448 18:29:53 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:04:37.448 18:29:53 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:04:37.448 18:29:53 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:04:37.448 18:29:53 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:04:37.448 18:29:53 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:04:37.448 18:29:53 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:04:37.448 18:29:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:37.448 18:29:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:37.448 18:29:53 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=913236
00:04:37.448 18:29:53 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:04:37.448 Waiting for target to run...
00:04:37.448 18:29:53 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 913236 /var/tmp/spdk_tgt.sock
00:04:37.448 18:29:53 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 913236 ']'
00:04:37.448 18:29:53 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:04:37.448 18:29:53 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:04:37.448 18:29:53 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:37.448 18:29:53 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:04:37.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:04:37.448 18:29:53 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:37.448 18:29:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:04:37.448 [2024-07-15 18:29:53.972187] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization...
00:04:37.448 [2024-07-15 18:29:53.972241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid913236 ]
00:04:37.448 EAL: No free 2048 kB hugepages reported on node 1
00:04:37.707 [2024-07-15 18:29:54.230560] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:37.707 [2024-07-15 18:29:54.298256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:38.274 18:29:54 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:38.274 18:29:54 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0
00:04:38.274 18:29:54 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:04:38.274
00:04:38.274 18:29:54 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:04:38.274 INFO: shutting down applications...
00:04:38.274 18:29:54 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:04:38.274 18:29:54 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:04:38.274 18:29:54 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:04:38.274 18:29:54 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 913236 ]]
00:04:38.274 18:29:54 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 913236
00:04:38.274 18:29:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:04:38.274 18:29:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:38.274 18:29:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 913236
00:04:38.274 18:29:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:04:38.842 18:29:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:04:38.842 18:29:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:38.842 18:29:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 913236
00:04:38.842 18:29:55 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:04:38.842 18:29:55 json_config_extra_key -- json_config/common.sh@43 -- # break
00:04:38.842 18:29:55 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:04:38.842 18:29:55 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:04:38.842 SPDK target shutdown done
00:04:38.842 18:29:55 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:04:38.842 Success
00:04:38.842
00:04:38.842 real 0m1.444s
00:04:38.842 user 0m1.241s
00:04:38.842 sys 0m0.362s
00:04:38.842 18:29:55 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:38.842 18:29:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:04:38.842 ************************************
00:04:38.842 END TEST json_config_extra_key
00:04:38.842 ************************************
00:04:38.842 18:29:55 -- common/autotest_common.sh@1142 -- # return 0
00:04:38.842 18:29:55 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:04:38.842 18:29:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:38.842 18:29:55 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:38.842 18:29:55 -- common/autotest_common.sh@10 -- # set +x
00:04:38.842 ************************************
00:04:38.842 START TEST alias_rpc
00:04:38.842 ************************************
00:04:38.842 18:29:55 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:04:38.842 * Looking for test storage...
00:04:38.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:04:38.842 18:29:55 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:04:38.842 18:29:55 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=913606
00:04:38.842 18:29:55 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 913606
00:04:38.842 18:29:55 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:38.842 18:29:55 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 913606 ']'
00:04:38.842 18:29:55 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:38.842 18:29:55 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:38.842 18:29:55 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:38.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:38.842 18:29:55 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:38.842 18:29:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:38.842 [2024-07-15 18:29:55.468198] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization...
00:04:38.842 [2024-07-15 18:29:55.468269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid913606 ]
00:04:38.842 EAL: No free 2048 kB hugepages reported on node 1
00:04:38.842 [2024-07-15 18:29:55.521841] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:39.101 [2024-07-15 18:29:55.602448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:39.669 18:29:56 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:39.669 18:29:56 alias_rpc -- common/autotest_common.sh@862 -- # return 0
00:04:39.669 18:29:56 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i
00:04:39.928 18:29:56 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 913606
00:04:39.928 18:29:56 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 913606 ']'
00:04:39.928 18:29:56 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 913606
00:04:39.928 18:29:56 alias_rpc -- common/autotest_common.sh@953 -- # uname
00:04:39.928 18:29:56 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:04:39.928 18:29:56 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 913606
00:04:39.928 18:29:56 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:04:39.928 18:29:56 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:04:39.928 18:29:56 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 913606'
00:04:39.928 killing process with pid 913606
00:04:39.928 18:29:56 alias_rpc -- common/autotest_common.sh@967 -- # kill 913606
00:04:39.928 18:29:56 alias_rpc -- common/autotest_common.sh@972 -- # wait 913606
00:04:40.188
00:04:40.188 real 0m1.489s
00:04:40.188 user 0m1.632s
00:04:40.188 sys 0m0.399s
00:04:40.188 18:29:56 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:40.188 18:29:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:40.188 ************************************
00:04:40.188 END TEST alias_rpc
00:04:40.188 ************************************
00:04:40.188 18:29:56 -- common/autotest_common.sh@1142 -- # return 0
00:04:40.188 18:29:56 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]]
00:04:40.188 18:29:56 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:04:40.188 18:29:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:40.188 18:29:56 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:40.188 18:29:56 -- common/autotest_common.sh@10 -- # set +x
00:04:40.188 ************************************
00:04:40.188 START TEST spdkcli_tcp
00:04:40.188 ************************************
00:04:40.188 18:29:56 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:04:40.448 * Looking for test storage...
00:04:40.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:04:40.448 18:29:56 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:04:40.448 18:29:56 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:04:40.448 18:29:56 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:04:40.448 18:29:56 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:04:40.448 18:29:56 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:04:40.448 18:29:56 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:04:40.448 18:29:56 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:04:40.448 18:29:56 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:04:40.448 18:29:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:40.448 18:29:56 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=913901
00:04:40.448 18:29:56 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 913901
00:04:40.448 18:29:56 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:04:40.448 18:29:56 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 913901 ']'
00:04:40.448 18:29:56 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:40.448 18:29:56 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:40.448 18:29:56 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:40.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:40.448 18:29:56 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:40.448 18:29:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:40.448 [2024-07-15 18:29:57.023203] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization...
00:04:40.448 [2024-07-15 18:29:57.023252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid913901 ]
00:04:40.448 EAL: No free 2048 kB hugepages reported on node 1
00:04:40.448 [2024-07-15 18:29:57.076204] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:40.448 [2024-07-15 18:29:57.149796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:04:40.448 [2024-07-15 18:29:57.149799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:41.386 18:29:57 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:41.386 18:29:57 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0
00:04:41.386 18:29:57 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:04:41.386 18:29:57 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=913919
00:04:41.387 18:29:57 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:04:41.387 [
00:04:41.387 "bdev_malloc_delete",
00:04:41.387 "bdev_malloc_create",
00:04:41.387 "bdev_null_resize",
00:04:41.387 "bdev_null_delete",
00:04:41.387 "bdev_null_create",
00:04:41.387 "bdev_nvme_cuse_unregister",
00:04:41.387 "bdev_nvme_cuse_register",
00:04:41.387 "bdev_opal_new_user",
00:04:41.387 "bdev_opal_set_lock_state",
00:04:41.387 "bdev_opal_delete",
00:04:41.387 "bdev_opal_get_info",
00:04:41.387 "bdev_opal_create",
00:04:41.387 "bdev_nvme_opal_revert",
00:04:41.387 "bdev_nvme_opal_init",
00:04:41.387 "bdev_nvme_send_cmd",
00:04:41.387 "bdev_nvme_get_path_iostat",
00:04:41.387 "bdev_nvme_get_mdns_discovery_info",
00:04:41.387 "bdev_nvme_stop_mdns_discovery",
00:04:41.387 "bdev_nvme_start_mdns_discovery",
00:04:41.387 "bdev_nvme_set_multipath_policy",
00:04:41.387 "bdev_nvme_set_preferred_path",
00:04:41.387 "bdev_nvme_get_io_paths",
00:04:41.387 "bdev_nvme_remove_error_injection",
00:04:41.387 "bdev_nvme_add_error_injection",
00:04:41.387 "bdev_nvme_get_discovery_info",
00:04:41.387 "bdev_nvme_stop_discovery",
00:04:41.387 "bdev_nvme_start_discovery",
00:04:41.387 "bdev_nvme_get_controller_health_info",
00:04:41.387 "bdev_nvme_disable_controller",
00:04:41.387 "bdev_nvme_enable_controller",
00:04:41.387 "bdev_nvme_reset_controller",
00:04:41.387 "bdev_nvme_get_transport_statistics",
00:04:41.387 "bdev_nvme_apply_firmware",
00:04:41.387 "bdev_nvme_detach_controller",
00:04:41.387 "bdev_nvme_get_controllers",
00:04:41.387 "bdev_nvme_attach_controller",
00:04:41.387 "bdev_nvme_set_hotplug",
00:04:41.387 "bdev_nvme_set_options",
00:04:41.387 "bdev_passthru_delete",
00:04:41.387 "bdev_passthru_create",
00:04:41.387 "bdev_lvol_set_parent_bdev",
00:04:41.387 "bdev_lvol_set_parent",
00:04:41.387 "bdev_lvol_check_shallow_copy",
00:04:41.387 "bdev_lvol_start_shallow_copy",
00:04:41.387 "bdev_lvol_grow_lvstore",
00:04:41.387 "bdev_lvol_get_lvols",
00:04:41.387 "bdev_lvol_get_lvstores",
00:04:41.387 "bdev_lvol_delete",
00:04:41.387 "bdev_lvol_set_read_only",
00:04:41.387 "bdev_lvol_resize",
00:04:41.387 "bdev_lvol_decouple_parent",
00:04:41.387 "bdev_lvol_inflate",
00:04:41.387 "bdev_lvol_rename",
00:04:41.387 "bdev_lvol_clone_bdev",
00:04:41.387 "bdev_lvol_clone",
00:04:41.387 "bdev_lvol_snapshot",
00:04:41.387 "bdev_lvol_create",
00:04:41.387 "bdev_lvol_delete_lvstore",
00:04:41.387 "bdev_lvol_rename_lvstore",
00:04:41.387 "bdev_lvol_create_lvstore",
00:04:41.387 "bdev_raid_set_options",
00:04:41.387 "bdev_raid_remove_base_bdev",
00:04:41.387 "bdev_raid_add_base_bdev",
00:04:41.387 "bdev_raid_delete",
00:04:41.387 "bdev_raid_create",
00:04:41.387 "bdev_raid_get_bdevs",
00:04:41.387 "bdev_error_inject_error",
00:04:41.387 "bdev_error_delete",
00:04:41.387 "bdev_error_create",
00:04:41.387 "bdev_split_delete",
00:04:41.387 "bdev_split_create",
00:04:41.387 "bdev_delay_delete",
00:04:41.387 "bdev_delay_create",
00:04:41.387 "bdev_delay_update_latency",
00:04:41.387 "bdev_zone_block_delete",
00:04:41.387 "bdev_zone_block_create",
00:04:41.387 "blobfs_create",
00:04:41.387 "blobfs_detect",
00:04:41.387 "blobfs_set_cache_size",
00:04:41.387 "bdev_aio_delete",
00:04:41.387 "bdev_aio_rescan",
00:04:41.387 "bdev_aio_create",
00:04:41.387 "bdev_ftl_set_property",
00:04:41.387 "bdev_ftl_get_properties",
00:04:41.387 "bdev_ftl_get_stats",
00:04:41.387 "bdev_ftl_unmap",
00:04:41.387 "bdev_ftl_unload",
00:04:41.387 "bdev_ftl_delete",
00:04:41.387 "bdev_ftl_load",
00:04:41.387 "bdev_ftl_create",
00:04:41.387 "bdev_virtio_attach_controller",
00:04:41.387 "bdev_virtio_scsi_get_devices",
00:04:41.387 "bdev_virtio_detach_controller",
00:04:41.387 "bdev_virtio_blk_set_hotplug",
00:04:41.387 "bdev_iscsi_delete",
00:04:41.387 "bdev_iscsi_create",
00:04:41.387 "bdev_iscsi_set_options",
00:04:41.387 "accel_error_inject_error",
00:04:41.387 "ioat_scan_accel_module",
00:04:41.387 "dsa_scan_accel_module",
00:04:41.387 "iaa_scan_accel_module",
00:04:41.387 "vfu_virtio_create_scsi_endpoint",
00:04:41.387 "vfu_virtio_scsi_remove_target",
00:04:41.387 "vfu_virtio_scsi_add_target",
00:04:41.387 "vfu_virtio_create_blk_endpoint",
00:04:41.387 "vfu_virtio_delete_endpoint",
00:04:41.387 "keyring_file_remove_key",
00:04:41.387 "keyring_file_add_key",
00:04:41.387 "keyring_linux_set_options",
00:04:41.387 "iscsi_get_histogram",
00:04:41.387 "iscsi_enable_histogram",
00:04:41.387 "iscsi_set_options",
00:04:41.387 "iscsi_get_auth_groups",
00:04:41.387 "iscsi_auth_group_remove_secret",
00:04:41.387 "iscsi_auth_group_add_secret",
00:04:41.387 "iscsi_delete_auth_group",
00:04:41.387 "iscsi_create_auth_group",
00:04:41.387 "iscsi_set_discovery_auth",
00:04:41.387 "iscsi_get_options",
00:04:41.387 "iscsi_target_node_request_logout",
00:04:41.387 "iscsi_target_node_set_redirect",
00:04:41.387 "iscsi_target_node_set_auth",
00:04:41.387 "iscsi_target_node_add_lun",
00:04:41.387 "iscsi_get_stats",
00:04:41.387 "iscsi_get_connections",
00:04:41.387 "iscsi_portal_group_set_auth",
00:04:41.387 "iscsi_start_portal_group",
00:04:41.387 "iscsi_delete_portal_group",
00:04:41.387 "iscsi_create_portal_group",
00:04:41.387 "iscsi_get_portal_groups",
00:04:41.387 "iscsi_delete_target_node",
00:04:41.387 "iscsi_target_node_remove_pg_ig_maps",
00:04:41.387 "iscsi_target_node_add_pg_ig_maps",
00:04:41.387 "iscsi_create_target_node",
00:04:41.387 "iscsi_get_target_nodes",
00:04:41.387 "iscsi_delete_initiator_group",
00:04:41.387 "iscsi_initiator_group_remove_initiators",
00:04:41.387 "iscsi_initiator_group_add_initiators",
00:04:41.387 "iscsi_create_initiator_group",
00:04:41.387 "iscsi_get_initiator_groups",
00:04:41.387 "nvmf_set_crdt",
00:04:41.387 "nvmf_set_config",
00:04:41.387 "nvmf_set_max_subsystems",
00:04:41.387 "nvmf_stop_mdns_prr",
00:04:41.387 "nvmf_publish_mdns_prr",
00:04:41.387 "nvmf_subsystem_get_listeners",
00:04:41.387 "nvmf_subsystem_get_qpairs",
00:04:41.387 "nvmf_subsystem_get_controllers",
00:04:41.387 "nvmf_get_stats",
00:04:41.387 "nvmf_get_transports",
00:04:41.387 "nvmf_create_transport",
00:04:41.387 "nvmf_get_targets",
00:04:41.387 "nvmf_delete_target",
00:04:41.387 "nvmf_create_target",
00:04:41.387 "nvmf_subsystem_allow_any_host",
00:04:41.387 "nvmf_subsystem_remove_host",
00:04:41.388 "nvmf_subsystem_add_host",
00:04:41.388 "nvmf_ns_remove_host",
00:04:41.388 "nvmf_ns_add_host",
00:04:41.388 "nvmf_subsystem_remove_ns",
00:04:41.388 "nvmf_subsystem_add_ns",
00:04:41.388 "nvmf_subsystem_listener_set_ana_state",
00:04:41.388 "nvmf_discovery_get_referrals",
00:04:41.388 "nvmf_discovery_remove_referral",
00:04:41.388 "nvmf_discovery_add_referral",
00:04:41.388 "nvmf_subsystem_remove_listener",
00:04:41.388 "nvmf_subsystem_add_listener",
00:04:41.388 "nvmf_delete_subsystem",
00:04:41.388 "nvmf_create_subsystem",
00:04:41.388 "nvmf_get_subsystems",
00:04:41.388 "env_dpdk_get_mem_stats",
00:04:41.388 "nbd_get_disks",
00:04:41.388 "nbd_stop_disk",
00:04:41.388 "nbd_start_disk",
00:04:41.388 "ublk_recover_disk",
00:04:41.388 "ublk_get_disks",
00:04:41.388 "ublk_stop_disk",
00:04:41.388 "ublk_start_disk",
00:04:41.388 "ublk_destroy_target",
00:04:41.388 "ublk_create_target",
00:04:41.388 "virtio_blk_create_transport",
00:04:41.388 "virtio_blk_get_transports",
00:04:41.388 "vhost_controller_set_coalescing",
00:04:41.388 "vhost_get_controllers",
00:04:41.388 "vhost_delete_controller",
00:04:41.388 "vhost_create_blk_controller",
00:04:41.388 "vhost_scsi_controller_remove_target",
00:04:41.388 "vhost_scsi_controller_add_target",
00:04:41.388 "vhost_start_scsi_controller",
00:04:41.388 "vhost_create_scsi_controller",
00:04:41.388 "thread_set_cpumask",
00:04:41.388 "framework_get_governor",
00:04:41.388 "framework_get_scheduler",
00:04:41.388 "framework_set_scheduler",
00:04:41.388 "framework_get_reactors",
00:04:41.388 "thread_get_io_channels",
00:04:41.388 "thread_get_pollers",
00:04:41.388 "thread_get_stats",
00:04:41.388 "framework_monitor_context_switch",
00:04:41.388 "spdk_kill_instance",
00:04:41.388 "log_enable_timestamps",
00:04:41.388 "log_get_flags",
00:04:41.388 "log_clear_flag",
00:04:41.388 "log_set_flag",
00:04:41.388 "log_get_level",
00:04:41.388 "log_set_level",
00:04:41.388 "log_get_print_level",
00:04:41.388 "log_set_print_level",
00:04:41.388 "framework_enable_cpumask_locks",
00:04:41.388 "framework_disable_cpumask_locks",
00:04:41.388 "framework_wait_init",
00:04:41.388 "framework_start_init",
00:04:41.388 "scsi_get_devices",
00:04:41.388 "bdev_get_histogram",
00:04:41.388 "bdev_enable_histogram",
00:04:41.388 "bdev_set_qos_limit",
00:04:41.388 "bdev_set_qd_sampling_period",
00:04:41.388 "bdev_get_bdevs",
00:04:41.388 "bdev_reset_iostat",
00:04:41.388 "bdev_get_iostat",
00:04:41.388 "bdev_examine",
00:04:41.388 "bdev_wait_for_examine",
00:04:41.388 "bdev_set_options",
00:04:41.388 "notify_get_notifications",
00:04:41.388 "notify_get_types",
00:04:41.388 "accel_get_stats",
00:04:41.388 "accel_set_options",
00:04:41.388 "accel_set_driver",
00:04:41.388 "accel_crypto_key_destroy",
00:04:41.388 "accel_crypto_keys_get",
00:04:41.388 "accel_crypto_key_create",
00:04:41.388 "accel_assign_opc",
00:04:41.388 "accel_get_module_info",
00:04:41.388 "accel_get_opc_assignments",
00:04:41.388 "vmd_rescan",
00:04:41.388 "vmd_remove_device",
00:04:41.388 "vmd_enable",
00:04:41.388 "sock_get_default_impl",
00:04:41.388 "sock_set_default_impl",
00:04:41.388 "sock_impl_set_options",
00:04:41.388 "sock_impl_get_options",
00:04:41.388 "iobuf_get_stats",
00:04:41.388 "iobuf_set_options",
00:04:41.388 "keyring_get_keys",
00:04:41.388 "framework_get_pci_devices",
00:04:41.388 "framework_get_config",
00:04:41.388 "framework_get_subsystems",
00:04:41.388 "vfu_tgt_set_base_path",
00:04:41.388 "trace_get_info",
00:04:41.388 "trace_get_tpoint_group_mask",
00:04:41.388 "trace_disable_tpoint_group",
00:04:41.388 "trace_enable_tpoint_group",
00:04:41.388 "trace_clear_tpoint_mask",
00:04:41.388 "trace_set_tpoint_mask",
00:04:41.388 "spdk_get_version",
00:04:41.388 "rpc_get_methods"
00:04:41.388 ]
00:04:41.388 18:29:57 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:04:41.388 18:29:57 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:04:41.388 18:29:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:41.388 18:29:58 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:04:41.388 18:29:58 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 913901
00:04:41.388 18:29:58 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 913901 ']'
00:04:41.388 18:29:58 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 913901
00:04:41.388 18:29:58 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname
00:04:41.388 18:29:58 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:04:41.388 18:29:58 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 913901
00:04:41.388 18:29:58 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:04:41.388 18:29:58 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:04:41.388 18:29:58 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 913901'
00:04:41.388 killing process with pid 913901
00:04:41.388 18:29:58 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 913901
00:04:41.388 18:29:58 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 913901
00:04:41.965
00:04:41.965 real 0m1.482s
00:04:41.965 user 0m2.767s
00:04:41.965 sys 0m0.418s
00:04:41.965 18:29:58 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:41.965 18:29:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:41.965 ************************************
00:04:41.965 END TEST spdkcli_tcp
00:04:41.965 ************************************
00:04:41.965 18:29:58 -- common/autotest_common.sh@1142 -- # return 0
00:04:41.965 18:29:58 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:04:41.965 18:29:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:41.965 18:29:58 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:41.965 18:29:58 -- common/autotest_common.sh@10 -- # set +x
00:04:41.965 ************************************
00:04:41.965 START TEST dpdk_mem_utility
00:04:41.965 ************************************
00:04:41.965 18:29:58 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:04:41.965 * Looking for test storage...
00:04:41.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility
00:04:41.965 18:29:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:04:41.965 18:29:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=914197
00:04:41.965 18:29:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 914197
00:04:41.965 18:29:58 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 914197 ']'
00:04:41.965 18:29:58 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:41.965 18:29:58 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:41.965 18:29:58 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:41.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:41.965 18:29:58 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:41.965 18:29:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:41.965 18:29:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:41.965 [2024-07-15 18:29:58.570109] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization...
00:04:41.965 [2024-07-15 18:29:58.570155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid914197 ]
00:04:41.965 EAL: No free 2048 kB hugepages reported on node 1
00:04:41.965 [2024-07-15 18:29:58.624345] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:42.224 [2024-07-15 18:29:58.703912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:42.793 18:29:59 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:42.793 18:29:59 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0
00:04:42.793 18:29:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:04:42.793 18:29:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:04:42.793 18:29:59 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:42.793 18:29:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:42.793 {
00:04:42.793 "filename": "/tmp/spdk_mem_dump.txt"
00:04:42.793 }
00:04:42.793 18:29:59 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:42.793 18:29:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:04:42.793 DPDK memory size 814.000000 MiB in 1 heap(s)
00:04:42.793 1 heaps totaling size 814.000000 MiB
00:04:42.793 size: 814.000000 MiB heap id: 0
00:04:42.793 end heaps----------
00:04:42.793 8 mempools totaling size 598.116089 MiB
00:04:42.793 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:04:42.793 size: 158.602051 MiB name: PDU_data_out_Pool
00:04:42.793 size: 84.521057 MiB name: bdev_io_914197
00:04:42.793 size: 51.011292 MiB name: evtpool_914197
00:04:42.794 size: 50.003479 MiB name: msgpool_914197
00:04:42.794 size: 21.763794 MiB name: PDU_Pool
00:04:42.794 size: 19.513306 MiB name: SCSI_TASK_Pool
00:04:42.794 size: 0.026123 MiB name: Session_Pool
00:04:42.794 end mempools-------
00:04:42.794 6 memzones totaling size 4.142822 MiB
00:04:42.794 size: 1.000366 MiB name: RG_ring_0_914197
00:04:42.794 size: 1.000366 MiB name: RG_ring_1_914197
00:04:42.794 size: 1.000366 MiB name: RG_ring_4_914197
00:04:42.794 size: 1.000366 MiB name: RG_ring_5_914197
00:04:42.794 size: 0.125366 MiB name: RG_ring_2_914197
00:04:42.794 size: 0.015991 MiB name: RG_ring_3_914197
00:04:42.794 end memzones-------
00:04:42.794 18:29:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:04:42.794 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15
00:04:42.794 list of free elements. size: 12.519348 MiB
00:04:42.794 element at address: 0x200000400000 with size: 1.999512 MiB
00:04:42.794 element at address: 0x200018e00000 with size: 0.999878 MiB
00:04:42.794 element at address: 0x200019000000 with size: 0.999878 MiB
00:04:42.794 element at address: 0x200003e00000 with size: 0.996277 MiB
00:04:42.794 element at address: 0x200031c00000 with size: 0.994446 MiB
00:04:42.794 element at address: 0x200013800000 with size: 0.978699 MiB
00:04:42.794 element at address: 0x200007000000 with size: 0.959839 MiB
00:04:42.794 element at address: 0x200019200000 with size: 0.936584 MiB
00:04:42.794 element at address: 0x200000200000 with size: 0.841614 MiB
00:04:42.794 element at address: 0x20001aa00000 with size: 0.582886 MiB
00:04:42.794 element at address: 0x20000b200000 with size: 0.490723 MiB
00:04:42.794 element at address: 0x200000800000 with size: 0.487793 MiB
00:04:42.794 element at address: 0x200019400000 with size: 0.485657 MiB
00:04:42.794 element at address: 0x200027e00000 with size: 0.410034 MiB
00:04:42.794 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:42.794 list of standard malloc elements. size: 199.218079 MiB 00:04:42.794 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:42.794 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:42.794 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:42.794 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:42.794 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:42.794 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:42.794 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:42.794 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:42.794 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:42.794 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:42.794 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:42.794 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:42.794 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:42.794 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:42.794 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:42.794 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:42.794 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:42.794 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:42.794 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:42.794 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:42.794 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:42.794 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:42.794 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:42.794 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:42.794 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:42.794 element at address: 0x200003eff0c0 with 
size: 0.000183 MiB 00:04:42.794 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:42.794 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:42.794 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:42.794 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:42.794 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:42.794 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:42.794 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:42.794 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:42.794 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:42.794 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:42.794 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:42.794 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:42.794 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:42.794 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:42.794 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:42.794 list of memzone associated elements. 
size: 602.262573 MiB 00:04:42.794 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:42.794 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:42.794 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:42.794 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:42.794 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:42.794 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_914197_0 00:04:42.794 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:42.794 associated memzone info: size: 48.002930 MiB name: MP_evtpool_914197_0 00:04:42.794 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:42.794 associated memzone info: size: 48.002930 MiB name: MP_msgpool_914197_0 00:04:42.794 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:42.794 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:42.794 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:42.794 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:42.794 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:42.794 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_914197 00:04:42.794 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:42.794 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_914197 00:04:42.794 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:42.794 associated memzone info: size: 1.007996 MiB name: MP_evtpool_914197 00:04:42.794 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:42.794 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:42.794 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:42.794 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:42.794 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:42.794 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:42.794 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:42.794 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:42.794 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:42.794 associated memzone info: size: 1.000366 MiB name: RG_ring_0_914197 00:04:42.794 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:42.794 associated memzone info: size: 1.000366 MiB name: RG_ring_1_914197 00:04:42.794 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:42.794 associated memzone info: size: 1.000366 MiB name: RG_ring_4_914197 00:04:42.794 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:42.794 associated memzone info: size: 1.000366 MiB name: RG_ring_5_914197 00:04:42.794 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:42.794 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_914197 00:04:42.794 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:42.794 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:42.794 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:42.794 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:42.794 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:42.794 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:42.794 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:42.794 associated memzone info: size: 0.125366 MiB name: RG_ring_2_914197 00:04:42.795 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:42.795 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:42.795 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:42.795 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:42.795 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:42.795 
associated memzone info: size: 0.015991 MiB name: RG_ring_3_914197 00:04:42.795 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:42.795 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:42.795 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:42.795 associated memzone info: size: 0.000183 MiB name: MP_msgpool_914197 00:04:42.795 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:42.795 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_914197 00:04:42.795 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:42.795 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:42.795 18:29:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:42.795 18:29:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 914197 00:04:42.795 18:29:59 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 914197 ']' 00:04:42.795 18:29:59 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 914197 00:04:42.795 18:29:59 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:42.795 18:29:59 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:42.795 18:29:59 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 914197 00:04:43.054 18:29:59 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:43.054 18:29:59 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:43.054 18:29:59 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 914197' 00:04:43.054 killing process with pid 914197 00:04:43.054 18:29:59 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 914197 00:04:43.054 18:29:59 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 914197 00:04:43.313 00:04:43.313 real 0m1.384s 00:04:43.313 user 0m1.463s 
00:04:43.313 sys 0m0.384s 00:04:43.313 18:29:59 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.313 18:29:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:43.313 ************************************ 00:04:43.313 END TEST dpdk_mem_utility 00:04:43.313 ************************************ 00:04:43.313 18:29:59 -- common/autotest_common.sh@1142 -- # return 0 00:04:43.313 18:29:59 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:43.313 18:29:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.313 18:29:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.313 18:29:59 -- common/autotest_common.sh@10 -- # set +x 00:04:43.313 ************************************ 00:04:43.313 START TEST event 00:04:43.313 ************************************ 00:04:43.313 18:29:59 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:43.313 * Looking for test storage... 
00:04:43.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:43.314 18:29:59 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:43.314 18:29:59 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:43.314 18:29:59 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:43.314 18:29:59 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:43.314 18:29:59 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.314 18:29:59 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.314 ************************************ 00:04:43.314 START TEST event_perf 00:04:43.314 ************************************ 00:04:43.314 18:30:00 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:43.572 Running I/O for 1 seconds...[2024-07-15 18:30:00.028895] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:04:43.573 [2024-07-15 18:30:00.028967] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid914493 ] 00:04:43.573 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.573 [2024-07-15 18:30:00.091318] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:43.573 [2024-07-15 18:30:00.168768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.573 [2024-07-15 18:30:00.168868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:43.573 [2024-07-15 18:30:00.168976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:43.573 [2024-07-15 18:30:00.168977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.950 Running I/O for 1 seconds... 00:04:44.951 lcore 0: 210561 00:04:44.951 lcore 1: 210561 00:04:44.951 lcore 2: 210561 00:04:44.951 lcore 3: 210560 00:04:44.951 done. 
00:04:44.951 00:04:44.951 real 0m1.232s 00:04:44.951 user 0m4.140s 00:04:44.951 sys 0m0.087s 00:04:44.951 18:30:01 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.951 18:30:01 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:44.951 ************************************ 00:04:44.951 END TEST event_perf 00:04:44.951 ************************************ 00:04:44.951 18:30:01 event -- common/autotest_common.sh@1142 -- # return 0 00:04:44.951 18:30:01 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:44.951 18:30:01 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:44.951 18:30:01 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.951 18:30:01 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.951 ************************************ 00:04:44.951 START TEST event_reactor 00:04:44.951 ************************************ 00:04:44.951 18:30:01 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:44.951 [2024-07-15 18:30:01.324149] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:04:44.951 [2024-07-15 18:30:01.324214] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid914830 ] 00:04:44.951 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.951 [2024-07-15 18:30:01.380782] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.951 [2024-07-15 18:30:01.452868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.886 test_start 00:04:45.886 oneshot 00:04:45.886 tick 100 00:04:45.886 tick 100 00:04:45.886 tick 250 00:04:45.886 tick 100 00:04:45.886 tick 100 00:04:45.886 tick 250 00:04:45.886 tick 100 00:04:45.886 tick 500 00:04:45.886 tick 100 00:04:45.886 tick 100 00:04:45.886 tick 250 00:04:45.886 tick 100 00:04:45.886 tick 100 00:04:45.886 test_end 00:04:45.886 00:04:45.886 real 0m1.219s 00:04:45.886 user 0m1.143s 00:04:45.886 sys 0m0.071s 00:04:45.886 18:30:02 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.886 18:30:02 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:45.886 ************************************ 00:04:45.886 END TEST event_reactor 00:04:45.886 ************************************ 00:04:45.886 18:30:02 event -- common/autotest_common.sh@1142 -- # return 0 00:04:45.886 18:30:02 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:45.886 18:30:02 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:45.886 18:30:02 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.886 18:30:02 event -- common/autotest_common.sh@10 -- # set +x 00:04:45.886 ************************************ 00:04:45.886 START TEST event_reactor_perf 00:04:45.886 ************************************ 00:04:45.887 18:30:02 
event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:46.145 [2024-07-15 18:30:02.603639] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:04:46.145 [2024-07-15 18:30:02.603686] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid915120 ] 00:04:46.145 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.145 [2024-07-15 18:30:02.656832] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.145 [2024-07-15 18:30:02.727876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.126 test_start 00:04:47.126 test_end 00:04:47.126 Performance: 502275 events per second 00:04:47.126 00:04:47.126 real 0m1.209s 00:04:47.126 user 0m1.137s 00:04:47.126 sys 0m0.067s 00:04:47.126 18:30:03 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.126 18:30:03 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:47.126 ************************************ 00:04:47.126 END TEST event_reactor_perf 00:04:47.126 ************************************ 00:04:47.391 18:30:03 event -- common/autotest_common.sh@1142 -- # return 0 00:04:47.391 18:30:03 event -- event/event.sh@49 -- # uname -s 00:04:47.391 18:30:03 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:47.391 18:30:03 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:47.391 18:30:03 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.391 18:30:03 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.391 18:30:03 event -- common/autotest_common.sh@10 -- # set +x 
00:04:47.391 ************************************ 00:04:47.391 START TEST event_scheduler 00:04:47.391 ************************************ 00:04:47.391 18:30:03 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:47.391 * Looking for test storage... 00:04:47.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:47.391 18:30:03 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:47.391 18:30:03 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=915394 00:04:47.391 18:30:03 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:47.391 18:30:03 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.391 18:30:03 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 915394 00:04:47.391 18:30:03 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 915394 ']' 00:04:47.391 18:30:03 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.391 18:30:03 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:47.391 18:30:03 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.391 18:30:03 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:47.391 18:30:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:47.391 [2024-07-15 18:30:03.994918] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:04:47.391 [2024-07-15 18:30:03.994966] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid915394 ] 00:04:47.391 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.391 [2024-07-15 18:30:04.047223] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:47.650 [2024-07-15 18:30:04.120961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.650 [2024-07-15 18:30:04.121047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.650 [2024-07-15 18:30:04.121134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:47.650 [2024-07-15 18:30:04.121136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:48.218 18:30:04 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.218 18:30:04 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:48.218 18:30:04 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:48.218 18:30:04 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.218 18:30:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.218 [2024-07-15 18:30:04.823539] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:48.218 [2024-07-15 18:30:04.823558] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:48.218 [2024-07-15 18:30:04.823568] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:48.218 [2024-07-15 18:30:04.823573] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:48.218 [2024-07-15 18:30:04.823579] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting 
scheduler core busy to 95 00:04:48.218 18:30:04 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.218 18:30:04 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:48.218 18:30:04 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.218 18:30:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.218 [2024-07-15 18:30:04.899758] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:48.218 18:30:04 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.218 18:30:04 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:48.218 18:30:04 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.218 18:30:04 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.218 18:30:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.477 ************************************ 00:04:48.477 START TEST scheduler_create_thread 00:04:48.477 ************************************ 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.477 2 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd 
--plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.477 3 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.477 4 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.477 5 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.477 6 
00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.477 7 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.477 8 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.477 18:30:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.477 9 00:04:48.477 18:30:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.477 18:30:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:48.477 18:30:05 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.477 18:30:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.477 10 00:04:48.477 18:30:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.477 18:30:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:48.477 18:30:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.477 18:30:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.477 18:30:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.477 18:30:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:48.477 18:30:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:48.477 18:30:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.477 18:30:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.477 18:30:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.477 18:30:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:48.477 18:30:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.477 18:30:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.852 18:30:06 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.852 18:30:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:49.852 18:30:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:49.852 18:30:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.852 18:30:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.229 18:30:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.229 00:04:51.229 real 0m2.621s 00:04:51.229 user 0m0.025s 00:04:51.229 sys 0m0.003s 00:04:51.229 18:30:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.230 18:30:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.230 ************************************ 00:04:51.230 END TEST scheduler_create_thread 00:04:51.230 ************************************ 00:04:51.230 18:30:07 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:51.230 18:30:07 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:51.230 18:30:07 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 915394 00:04:51.230 18:30:07 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 915394 ']' 00:04:51.230 18:30:07 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 915394 00:04:51.230 18:30:07 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:51.230 18:30:07 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:51.230 18:30:07 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 915394 00:04:51.230 18:30:07 event.event_scheduler -- 
common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:51.230 18:30:07 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:51.230 18:30:07 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 915394' 00:04:51.230 killing process with pid 915394 00:04:51.230 18:30:07 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 915394 00:04:51.230 18:30:07 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 915394 00:04:51.487 [2024-07-15 18:30:08.033816] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:51.746 00:04:51.746 real 0m4.355s 00:04:51.746 user 0m8.327s 00:04:51.746 sys 0m0.338s 00:04:51.746 18:30:08 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.746 18:30:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:51.746 ************************************ 00:04:51.746 END TEST event_scheduler 00:04:51.746 ************************************ 00:04:51.746 18:30:08 event -- common/autotest_common.sh@1142 -- # return 0 00:04:51.746 18:30:08 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:51.746 18:30:08 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:51.746 18:30:08 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.746 18:30:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.746 18:30:08 event -- common/autotest_common.sh@10 -- # set +x 00:04:51.746 ************************************ 00:04:51.746 START TEST app_repeat 00:04:51.746 ************************************ 00:04:51.746 18:30:08 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:51.746 18:30:08 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.746 18:30:08 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.746 18:30:08 
event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:51.746 18:30:08 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.746 18:30:08 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:51.746 18:30:08 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:51.746 18:30:08 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:51.746 18:30:08 event.app_repeat -- event/event.sh@19 -- # repeat_pid=916211 00:04:51.746 18:30:08 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.746 18:30:08 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:51.746 18:30:08 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 916211' 00:04:51.746 Process app_repeat pid: 916211 00:04:51.746 18:30:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:51.746 18:30:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:51.746 spdk_app_start Round 0 00:04:51.746 18:30:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 916211 /var/tmp/spdk-nbd.sock 00:04:51.746 18:30:08 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 916211 ']' 00:04:51.746 18:30:08 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:51.746 18:30:08 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.746 18:30:08 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:51.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:51.746 18:30:08 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.746 18:30:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:51.746 [2024-07-15 18:30:08.333798] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:04:51.746 [2024-07-15 18:30:08.333848] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid916211 ] 00:04:51.746 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.746 [2024-07-15 18:30:08.389214] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:52.004 [2024-07-15 18:30:08.469460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.004 [2024-07-15 18:30:08.469463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.571 18:30:09 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.571 18:30:09 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:52.571 18:30:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.830 Malloc0 00:04:52.830 18:30:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.830 Malloc1 00:04:53.090 18:30:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.090 18:30:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.090 18:30:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.090 18:30:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 
00:04:53.090 18:30:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.090 18:30:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:53.090 18:30:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.090 18:30:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.090 18:30:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.090 18:30:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:53.090 18:30:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.090 18:30:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:53.090 18:30:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:53.090 18:30:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:53.090 18:30:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.090 18:30:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:53.090 /dev/nbd0 00:04:53.090 18:30:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:53.090 18:30:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:53.090 18:30:09 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:53.090 18:30:09 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:53.090 18:30:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:53.090 18:30:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:53.090 18:30:09 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:53.090 18:30:09 event.app_repeat -- common/autotest_common.sh@871 -- # 
break 00:04:53.090 18:30:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:53.090 18:30:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:53.090 18:30:09 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:53.090 1+0 records in 00:04:53.090 1+0 records out 00:04:53.090 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180396 s, 22.7 MB/s 00:04:53.090 18:30:09 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:53.090 18:30:09 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:53.090 18:30:09 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:53.090 18:30:09 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:53.090 18:30:09 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:53.090 18:30:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.090 18:30:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.090 18:30:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:53.349 /dev/nbd1 00:04:53.349 18:30:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:53.349 18:30:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:53.349 18:30:09 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:53.349 18:30:09 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:53.349 18:30:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:53.349 18:30:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 
)) 00:04:53.349 18:30:09 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:53.349 18:30:09 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:53.349 18:30:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:53.349 18:30:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:53.349 18:30:09 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:53.349 1+0 records in 00:04:53.349 1+0 records out 00:04:53.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184633 s, 22.2 MB/s 00:04:53.349 18:30:09 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:53.349 18:30:09 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:53.349 18:30:09 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:53.349 18:30:09 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:53.349 18:30:09 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:53.349 18:30:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.349 18:30:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.349 18:30:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.349 18:30:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.349 18:30:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.607 18:30:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:53.607 { 00:04:53.607 "nbd_device": "/dev/nbd0", 00:04:53.607 "bdev_name": 
"Malloc0" 00:04:53.607 }, 00:04:53.607 { 00:04:53.607 "nbd_device": "/dev/nbd1", 00:04:53.607 "bdev_name": "Malloc1" 00:04:53.607 } 00:04:53.607 ]' 00:04:53.607 18:30:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:53.607 { 00:04:53.607 "nbd_device": "/dev/nbd0", 00:04:53.607 "bdev_name": "Malloc0" 00:04:53.607 }, 00:04:53.607 { 00:04:53.607 "nbd_device": "/dev/nbd1", 00:04:53.607 "bdev_name": "Malloc1" 00:04:53.607 } 00:04:53.607 ]' 00:04:53.607 18:30:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:53.607 18:30:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:53.607 /dev/nbd1' 00:04:53.607 18:30:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:53.607 /dev/nbd1' 00:04:53.607 18:30:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.607 18:30:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:53.607 18:30:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:53.607 18:30:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:53.607 18:30:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:53.607 18:30:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:53.607 18:30:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.607 18:30:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.607 18:30:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:53.607 18:30:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.607 18:30:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:53.607 18:30:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 
count=256 00:04:53.607 256+0 records in 00:04:53.607 256+0 records out 00:04:53.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103503 s, 101 MB/s 00:04:53.607 18:30:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.607 18:30:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:53.607 256+0 records in 00:04:53.607 256+0 records out 00:04:53.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130723 s, 80.2 MB/s 00:04:53.607 18:30:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.607 18:30:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:53.607 256+0 records in 00:04:53.607 256+0 records out 00:04:53.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141389 s, 74.2 MB/s 00:04:53.607 18:30:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:53.607 18:30:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.607 18:30:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.607 18:30:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:53.607 18:30:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.607 18:30:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:53.608 18:30:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:53.608 18:30:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.608 18:30:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 
00:04:53.608 18:30:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.608 18:30:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:53.608 18:30:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.608 18:30:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:53.608 18:30:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.608 18:30:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.608 18:30:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:53.608 18:30:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:53.608 18:30:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.608 18:30:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:53.866 18:30:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:53.866 18:30:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:53.866 18:30:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:53.866 18:30:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.866 18:30:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.866 18:30:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:53.866 18:30:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:53.866 18:30:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.866 18:30:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.866 18:30:10 
event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:54.125 18:30:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:54.125 18:30:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:54.125 18:30:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:54.125 18:30:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.125 18:30:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:54.125 18:30:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:54.125 18:30:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:54.125 18:30:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.125 18:30:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.125 18:30:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.125 18:30:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.125 18:30:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:54.125 18:30:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:54.125 18:30:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.444 18:30:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:54.444 18:30:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:54.444 18:30:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.444 18:30:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:54.444 18:30:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:54.444 18:30:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:54.444 18:30:10 
event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:54.444 18:30:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:54.444 18:30:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:54.444 18:30:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:54.444 18:30:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:54.702 [2024-07-15 18:30:11.234995] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.702 [2024-07-15 18:30:11.302233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.702 [2024-07-15 18:30:11.302234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.702 [2024-07-15 18:30:11.342770] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:54.702 [2024-07-15 18:30:11.342809] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:57.988 18:30:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:57.988 18:30:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:57.988 spdk_app_start Round 1 00:04:57.988 18:30:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 916211 /var/tmp/spdk-nbd.sock 00:04:57.988 18:30:14 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 916211 ']' 00:04:57.988 18:30:14 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:57.988 18:30:14 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.988 18:30:14 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:57.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:57.988 18:30:14 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.988 18:30:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:57.988 18:30:14 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.988 18:30:14 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:57.988 18:30:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.988 Malloc0 00:04:57.988 18:30:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.988 Malloc1 00:04:57.988 18:30:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.988 18:30:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.988 18:30:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.988 18:30:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:57.988 18:30:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.988 18:30:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:57.988 18:30:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.988 18:30:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.988 18:30:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.988 18:30:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:57.988 18:30:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.988 18:30:14 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:57.988 18:30:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:57.988 18:30:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:57.988 18:30:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.988 18:30:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:58.248 /dev/nbd0 00:04:58.248 18:30:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:58.248 18:30:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:58.248 18:30:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:58.248 18:30:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:58.248 18:30:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:58.248 18:30:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:58.248 18:30:14 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:58.248 18:30:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:58.248 18:30:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:58.248 18:30:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:58.248 18:30:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.248 1+0 records in 00:04:58.248 1+0 records out 00:04:58.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180387 s, 22.7 MB/s 00:04:58.248 18:30:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.248 18:30:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:58.248 18:30:14 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.248 18:30:14 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:58.248 18:30:14 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:58.248 18:30:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.248 18:30:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.248 18:30:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:58.507 /dev/nbd1 00:04:58.507 18:30:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:58.507 18:30:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:58.507 18:30:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:58.507 18:30:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:58.507 18:30:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:58.507 18:30:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:58.507 18:30:14 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:58.507 18:30:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:58.507 18:30:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:58.507 18:30:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:58.507 18:30:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.507 1+0 records in 00:04:58.507 1+0 records out 00:04:58.507 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199037 s, 20.6 MB/s 00:04:58.507 18:30:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.507 18:30:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:58.507 18:30:14 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.507 18:30:15 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:58.507 18:30:15 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:58.507 18:30:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.507 18:30:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.507 18:30:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.507 18:30:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.507 18:30:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:58.507 18:30:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:58.507 { 00:04:58.507 "nbd_device": "/dev/nbd0", 00:04:58.507 "bdev_name": "Malloc0" 00:04:58.507 }, 00:04:58.507 { 00:04:58.507 "nbd_device": "/dev/nbd1", 00:04:58.507 "bdev_name": "Malloc1" 00:04:58.507 } 00:04:58.507 ]' 00:04:58.507 18:30:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:58.507 { 00:04:58.507 "nbd_device": "/dev/nbd0", 00:04:58.507 "bdev_name": "Malloc0" 00:04:58.507 }, 00:04:58.507 { 00:04:58.507 "nbd_device": "/dev/nbd1", 00:04:58.507 "bdev_name": "Malloc1" 00:04:58.507 } 00:04:58.507 ]' 00:04:58.507 18:30:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:58.767 /dev/nbd1' 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:58.767 /dev/nbd1' 00:04:58.767 
18:30:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:58.767 256+0 records in 00:04:58.767 256+0 records out 00:04:58.767 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00494251 s, 212 MB/s 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:58.767 256+0 records in 00:04:58.767 256+0 records out 00:04:58.767 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135853 s, 77.2 MB/s 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:58.767 256+0 records in 00:04:58.767 256+0 records out 00:04:58.767 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149119 s, 70.3 MB/s 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:58.767 18:30:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:59.027 18:30:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:59.027 18:30:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:59.027 18:30:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.027 18:30:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.027 18:30:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:59.027 18:30:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:59.027 18:30:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.027 18:30:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.027 18:30:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:59.027 18:30:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:59.027 18:30:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:59.027 18:30:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:59.027 18:30:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.027 18:30:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.027 18:30:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:59.027 18:30:15 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:59.027 18:30:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.027 18:30:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.027 18:30:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.027 18:30:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.287 18:30:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:59.287 18:30:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:59.287 18:30:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.287 18:30:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:59.287 18:30:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:59.287 18:30:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.287 18:30:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:59.287 18:30:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:59.287 18:30:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:59.287 18:30:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:59.287 18:30:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:59.287 18:30:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:59.287 18:30:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:59.546 18:30:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:59.805 [2024-07-15 18:30:16.275959] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.805 [2024-07-15 18:30:16.343664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.805 [2024-07-15 18:30:16.343667] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.805 [2024-07-15 18:30:16.385019] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:59.806 [2024-07-15 18:30:16.385059] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:03.097 18:30:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:03.097 18:30:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:03.097 spdk_app_start Round 2 00:05:03.097 18:30:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 916211 /var/tmp/spdk-nbd.sock 00:05:03.097 18:30:19 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 916211 ']' 00:05:03.097 18:30:19 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:03.097 18:30:19 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.097 18:30:19 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:03.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
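The `nbd_dd_data_verify` flow traced above (write a 1 MiB random file, `dd` it onto each NBD device, then `cmp` the first 1 MiB back) can be sketched as below. This is a simplified stand-in, not the SPDK helper itself: plain temp files replace `/dev/nbd0`/`/dev/nbd1` (and `oflag=direct` is dropped) so it runs without the NBD kernel module.

```shell
# Write/verify pattern from nbd_dd_common.sh, with temp files standing
# in for the real /dev/nbd* devices.
tmp_file=$(mktemp)
nbd_list=("$(mktemp)" "$(mktemp)")   # stand-ins for /dev/nbd0 /dev/nbd1

# write phase: 1 MiB of random data, fanned out to every "device"
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
done

# verify phase: byte-compare the first 1 MiB of each "device"
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev" || echo "mismatch on $dev"
done

rm -f "$tmp_file" "${nbd_list[@]}"
```

Against real NBD devices the write side additionally needs `oflag=direct`, as in the log, so the data actually reaches the bdev rather than the page cache before `cmp` reads it back.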
00:05:03.097 18:30:19 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.097 18:30:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:03.097 18:30:19 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.097 18:30:19 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:03.097 18:30:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.097 Malloc0 00:05:03.097 18:30:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.097 Malloc1 00:05:03.097 18:30:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.097 18:30:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.097 18:30:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.097 18:30:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:03.097 18:30:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.097 18:30:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:03.097 18:30:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.097 18:30:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.097 18:30:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.097 18:30:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:03.097 18:30:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.097 18:30:19 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:03.097 18:30:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:03.097 18:30:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:03.097 18:30:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.097 18:30:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:03.356 /dev/nbd0 00:05:03.356 18:30:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:03.356 18:30:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:03.356 18:30:19 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:03.356 18:30:19 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:03.356 18:30:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:03.356 18:30:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:03.356 18:30:19 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:03.356 18:30:19 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:03.356 18:30:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:03.356 18:30:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:03.356 18:30:19 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.356 1+0 records in 00:05:03.356 1+0 records out 00:05:03.356 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182762 s, 22.4 MB/s 00:05:03.357 18:30:19 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:03.357 18:30:19 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:03.357 18:30:19 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:03.357 18:30:19 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:03.357 18:30:19 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:03.357 18:30:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.357 18:30:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.357 18:30:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:03.357 /dev/nbd1 00:05:03.357 18:30:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:03.357 18:30:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:03.357 18:30:20 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:03.357 18:30:20 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:03.357 18:30:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:03.357 18:30:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:03.357 18:30:20 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:03.357 18:30:20 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:03.357 18:30:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:03.357 18:30:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:03.357 18:30:20 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.357 1+0 records in 00:05:03.357 1+0 records out 00:05:03.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180405 s, 22.7 MB/s 00:05:03.357 18:30:20 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:03.357 18:30:20 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:03.357 18:30:20 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:03.616 18:30:20 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:03.616 18:30:20 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:03.616 18:30:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.616 18:30:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.616 18:30:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.616 18:30:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.616 18:30:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.616 18:30:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:03.616 { 00:05:03.616 "nbd_device": "/dev/nbd0", 00:05:03.616 "bdev_name": "Malloc0" 00:05:03.616 }, 00:05:03.616 { 00:05:03.616 "nbd_device": "/dev/nbd1", 00:05:03.616 "bdev_name": "Malloc1" 00:05:03.616 } 00:05:03.616 ]' 00:05:03.616 18:30:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:03.616 { 00:05:03.616 "nbd_device": "/dev/nbd0", 00:05:03.616 "bdev_name": "Malloc0" 00:05:03.616 }, 00:05:03.616 { 00:05:03.616 "nbd_device": "/dev/nbd1", 00:05:03.616 "bdev_name": "Malloc1" 00:05:03.616 } 00:05:03.616 ]' 00:05:03.616 18:30:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:03.617 18:30:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:03.617 /dev/nbd1' 00:05:03.617 18:30:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:03.617 /dev/nbd1' 00:05:03.617 
18:30:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.617 18:30:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:03.617 18:30:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:03.617 18:30:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:03.617 18:30:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:03.617 18:30:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:03.617 18:30:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.617 18:30:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.617 18:30:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:03.617 18:30:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:03.617 18:30:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:03.617 18:30:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:03.617 256+0 records in 00:05:03.617 256+0 records out 00:05:03.617 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103588 s, 101 MB/s 00:05:03.617 18:30:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.617 18:30:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:03.876 256+0 records in 00:05:03.876 256+0 records out 00:05:03.876 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137911 s, 76.0 MB/s 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:03.876 256+0 records in 00:05:03.876 256+0 records out 00:05:03.876 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147319 s, 71.2 MB/s 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.876 18:30:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:04.136 18:30:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:04.136 18:30:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:04.136 18:30:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:04.136 18:30:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.136 18:30:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.136 18:30:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:04.136 18:30:20 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:04.136 18:30:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.136 18:30:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.136 18:30:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.136 18:30:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.395 18:30:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:04.395 18:30:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:04.395 18:30:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:04.395 18:30:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:04.395 18:30:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:04.395 18:30:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.395 18:30:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:04.395 18:30:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:04.395 18:30:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:04.395 18:30:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:04.395 18:30:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:04.395 18:30:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:04.395 18:30:20 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:04.655 18:30:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:04.948 [2024-07-15 18:30:21.363674] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.948 [2024-07-15 18:30:21.437371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.948 [2024-07-15 18:30:21.437375] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.948 [2024-07-15 18:30:21.478300] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:04.948 [2024-07-15 18:30:21.478339] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:07.483 18:30:24 event.app_repeat -- event/event.sh@38 -- # waitforlisten 916211 /var/tmp/spdk-nbd.sock 00:05:07.483 18:30:24 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 916211 ']' 00:05:07.483 18:30:24 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:07.483 18:30:24 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.483 18:30:24 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:07.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:07.483 18:30:24 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.483 18:30:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:07.742 18:30:24 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.742 18:30:24 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:07.742 18:30:24 event.app_repeat -- event/event.sh@39 -- # killprocess 916211 00:05:07.742 18:30:24 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 916211 ']' 00:05:07.742 18:30:24 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 916211 00:05:07.742 18:30:24 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:07.742 18:30:24 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:07.742 18:30:24 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 916211 00:05:07.742 18:30:24 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:07.742 18:30:24 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:07.742 18:30:24 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 916211' 00:05:07.742 killing process with pid 916211 00:05:07.742 18:30:24 event.app_repeat -- common/autotest_common.sh@967 -- # kill 916211 00:05:07.742 18:30:24 event.app_repeat -- common/autotest_common.sh@972 -- # wait 916211 00:05:08.002 spdk_app_start is called in Round 0. 00:05:08.002 Shutdown signal received, stop current app iteration 00:05:08.002 Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 reinitialization... 00:05:08.002 spdk_app_start is called in Round 1. 00:05:08.002 Shutdown signal received, stop current app iteration 00:05:08.002 Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 reinitialization... 00:05:08.002 spdk_app_start is called in Round 2. 
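The `killprocess` teardown traced above checks the pid is still alive with `kill -0`, inspects the process name via `ps` (refusing to kill anything running as `sudo`), then sends SIGTERM and reaps it with `wait`. A minimal sketch of that sequence, using a throwaway `sleep` in place of the SPDK target:

```shell
# Stand-in target process (the log's pid 916211 is an SPDK reactor).
sleep 60 &
pid=$!

if kill -0 "$pid" 2>/dev/null; then                  # pid still alive?
    process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
    if [ "$process_name" != sudo ]; then             # never kill sudo itself
        echo "killing process with pid $pid"
        kill "$pid"                                  # SIGTERM
        wait "$pid" 2>/dev/null || true              # reap; ignore 143 status
    fi
fi
```

The `wait` matters: without it the shell leaves a zombie behind and a later `kill -0` on a recycled pid could hit an unrelated process.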
00:05:08.002 Shutdown signal received, stop current app iteration 00:05:08.002 Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 reinitialization... 00:05:08.002 spdk_app_start is called in Round 3. 00:05:08.002 Shutdown signal received, stop current app iteration 00:05:08.002 18:30:24 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:08.002 18:30:24 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:08.002 00:05:08.002 real 0m16.259s 00:05:08.002 user 0m35.250s 00:05:08.002 sys 0m2.345s 00:05:08.002 18:30:24 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.002 18:30:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:08.002 ************************************ 00:05:08.002 END TEST app_repeat 00:05:08.002 ************************************ 00:05:08.002 18:30:24 event -- common/autotest_common.sh@1142 -- # return 0 00:05:08.002 18:30:24 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:08.002 18:30:24 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:08.002 18:30:24 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.002 18:30:24 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.002 18:30:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.002 ************************************ 00:05:08.002 START TEST cpu_locks 00:05:08.002 ************************************ 00:05:08.002 18:30:24 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:08.002 * Looking for test storage... 
00:05:08.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:08.262 18:30:24 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:08.262 18:30:24 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:08.262 18:30:24 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:08.262 18:30:24 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:08.262 18:30:24 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.262 18:30:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.262 18:30:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.262 ************************************ 00:05:08.262 START TEST default_locks 00:05:08.262 ************************************ 00:05:08.262 18:30:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:08.262 18:30:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=919521 00:05:08.262 18:30:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 919521 00:05:08.262 18:30:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:08.262 18:30:24 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 919521 ']' 00:05:08.262 18:30:24 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.262 18:30:24 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:08.262 18:30:24 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:08.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.262 18:30:24 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:08.262 18:30:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.262 [2024-07-15 18:30:24.792112] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:05:08.262 [2024-07-15 18:30:24.792161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid919521 ] 00:05:08.262 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.262 [2024-07-15 18:30:24.844713] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.262 [2024-07-15 18:30:24.923515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.199 18:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.199 18:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:09.199 18:30:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 919521 00:05:09.199 18:30:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 919521 00:05:09.199 18:30:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:09.459 lslocks: write error 00:05:09.459 18:30:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 919521 00:05:09.459 18:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 919521 ']' 00:05:09.459 18:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 919521 00:05:09.459 18:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:09.459 18:30:25 event.cpu_locks.default_locks -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:09.459 18:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 919521 00:05:09.459 18:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:09.459 18:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:09.459 18:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 919521' 00:05:09.459 killing process with pid 919521 00:05:09.459 18:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 919521 00:05:09.459 18:30:25 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 919521 00:05:09.720 18:30:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 919521 00:05:09.720 18:30:26 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:09.720 18:30:26 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 919521 00:05:09.720 18:30:26 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:09.720 18:30:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.720 18:30:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:09.720 18:30:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.720 18:30:26 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 919521 00:05:09.720 18:30:26 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 919521 ']' 00:05:09.720 18:30:26 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.720 18:30:26 event.cpu_locks.default_locks -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.720 18:30:26 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.720 18:30:26 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.720 18:30:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (919521) - No such process 00:05:09.720 ERROR: process (pid: 919521) is no longer running 00:05:09.720 18:30:26 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.720 18:30:26 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:09.720 18:30:26 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:09.720 18:30:26 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:09.720 18:30:26 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:09.720 18:30:26 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:09.720 18:30:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:09.720 18:30:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:09.720 18:30:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:09.720 18:30:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:09.720 00:05:09.720 real 0m1.547s 00:05:09.720 user 0m1.624s 00:05:09.720 sys 0m0.511s 00:05:09.720 18:30:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.720 18:30:26 event.cpu_locks.default_locks -- 
common/autotest_common.sh@10 -- # set +x 00:05:09.720 ************************************ 00:05:09.720 END TEST default_locks 00:05:09.720 ************************************ 00:05:09.720 18:30:26 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:09.720 18:30:26 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:09.720 18:30:26 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.720 18:30:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.720 18:30:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.720 ************************************ 00:05:09.720 START TEST default_locks_via_rpc 00:05:09.720 ************************************ 00:05:09.720 18:30:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:09.720 18:30:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=919782 00:05:09.720 18:30:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 919782 00:05:09.720 18:30:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.720 18:30:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 919782 ']' 00:05:09.720 18:30:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.720 18:30:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.720 18:30:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
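The `locks_exist` check exercised above asserts that the running target holds its per-core lock file by piping `lslocks -p <pid>` through `grep -q spdk_cpu_lock`. The underlying mechanism is an exclusive `flock` on the lock file; the sketch below demonstrates that exclusion directly with a hypothetical lock-file name, since it does not require a running SPDK target or `lslocks`:

```shell
lock_file=/tmp/spdk_cpu_lock_demo   # hypothetical name, for the demo only

exec 9>"$lock_file"                 # open fd 9 on the lock file
flock -n 9                          # take the exclusive lock, as spdk_tgt does

# A second, independent taker is refused while the lock is held.
if flock -n "$lock_file" -c true 2>/dev/null; then
    echo "lock not held"
else
    echo "lock held"
fi

exec 9>&-                           # closing fd 9 releases the lock
rm -f "$lock_file"
```

In the log, `lslocks: write error` is only `lslocks` complaining about its output pipe being closed early by `grep -q`; the lock check itself still succeeds.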
00:05:09.720 18:30:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.720 18:30:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.720 [2024-07-15 18:30:26.405394] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:05:09.720 [2024-07-15 18:30:26.405436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid919782 ] 00:05:09.980 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.980 [2024-07-15 18:30:26.460746] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.980 [2024-07-15 18:30:26.529703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.547 18:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.547 18:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:10.547 18:30:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:10.547 18:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.547 18:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.547 18:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.547 18:30:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:10.547 18:30:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:10.547 18:30:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:10.547 18:30:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 
00:05:10.547 18:30:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:10.547 18:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.547 18:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.547 18:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.547 18:30:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 919782 00:05:10.547 18:30:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 919782 00:05:10.547 18:30:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:10.806 18:30:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 919782 00:05:10.806 18:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 919782 ']' 00:05:10.806 18:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 919782 00:05:10.806 18:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:10.806 18:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.806 18:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 919782 00:05:10.806 18:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:10.806 18:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:10.806 18:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 919782' 00:05:10.806 killing process with pid 919782 00:05:10.806 18:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- 
# kill 919782 00:05:10.806 18:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 919782 00:05:11.066 00:05:11.066 real 0m1.344s 00:05:11.066 user 0m1.412s 00:05:11.066 sys 0m0.405s 00:05:11.066 18:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.066 18:30:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.066 ************************************ 00:05:11.066 END TEST default_locks_via_rpc 00:05:11.066 ************************************ 00:05:11.066 18:30:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:11.066 18:30:27 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:11.066 18:30:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.066 18:30:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.066 18:30:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.066 ************************************ 00:05:11.066 START TEST non_locking_app_on_locked_coremask 00:05:11.066 ************************************ 00:05:11.066 18:30:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:11.066 18:30:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=920046 00:05:11.066 18:30:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 920046 /var/tmp/spdk.sock 00:05:11.066 18:30:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.066 18:30:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 920046 ']' 00:05:11.066 18:30:27 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.066 18:30:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.066 18:30:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.066 18:30:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.066 18:30:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.325 [2024-07-15 18:30:27.816679] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:05:11.325 [2024-07-15 18:30:27.816719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid920046 ] 00:05:11.325 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.325 [2024-07-15 18:30:27.869389] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.325 [2024-07-15 18:30:27.948796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.264 18:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.264 18:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:12.264 18:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=920276 00:05:12.264 18:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 920276 
/var/tmp/spdk2.sock 00:05:12.264 18:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:12.264 18:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 920276 ']' 00:05:12.264 18:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:12.264 18:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.264 18:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:12.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:12.264 18:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.264 18:30:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.264 [2024-07-15 18:30:28.661259] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:05:12.264 [2024-07-15 18:30:28.661310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid920276 ] 00:05:12.264 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.264 [2024-07-15 18:30:28.731637] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:12.264 [2024-07-15 18:30:28.731658] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.264 [2024-07-15 18:30:28.880629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.832 18:30:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.832 18:30:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:12.832 18:30:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 920046 00:05:12.832 18:30:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 920046 00:05:12.832 18:30:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:13.402 lslocks: write error 00:05:13.402 18:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 920046 00:05:13.402 18:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 920046 ']' 00:05:13.402 18:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 920046 00:05:13.402 18:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:13.402 18:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:13.402 18:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 920046 00:05:13.402 18:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:13.402 18:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:13.402 18:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 920046' 00:05:13.402 killing process with pid 920046 00:05:13.402 18:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 920046 00:05:13.402 18:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 920046 00:05:13.970 18:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 920276 00:05:13.970 18:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 920276 ']' 00:05:13.970 18:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 920276 00:05:14.230 18:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:14.230 18:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:14.230 18:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 920276 00:05:14.230 18:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:14.230 18:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:14.230 18:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 920276' 00:05:14.230 killing process with pid 920276 00:05:14.230 18:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 920276 00:05:14.230 18:30:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 920276 00:05:14.489 00:05:14.489 real 0m3.265s 00:05:14.490 user 0m3.493s 00:05:14.490 sys 0m0.946s 00:05:14.490 18:30:31 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.490 18:30:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.490 ************************************ 00:05:14.490 END TEST non_locking_app_on_locked_coremask 00:05:14.490 ************************************ 00:05:14.490 18:30:31 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:14.490 18:30:31 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:14.490 18:30:31 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.490 18:30:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.490 18:30:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.490 ************************************ 00:05:14.490 START TEST locking_app_on_unlocked_coremask 00:05:14.490 ************************************ 00:05:14.490 18:30:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:14.490 18:30:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:14.490 18:30:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=920764 00:05:14.490 18:30:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 920764 /var/tmp/spdk.sock 00:05:14.490 18:30:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 920764 ']' 00:05:14.490 18:30:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.490 18:30:31 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.490 18:30:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.490 18:30:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.490 18:30:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.490 [2024-07-15 18:30:31.131934] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:05:14.490 [2024-07-15 18:30:31.131970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid920764 ] 00:05:14.490 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.490 [2024-07-15 18:30:31.183781] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:14.490 [2024-07-15 18:30:31.183803] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.770 [2024-07-15 18:30:31.262854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.337 18:30:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.337 18:30:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:15.337 18:30:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=920783 00:05:15.337 18:30:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 920783 /var/tmp/spdk2.sock 00:05:15.337 18:30:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:15.337 18:30:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 920783 ']' 00:05:15.337 18:30:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.337 18:30:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.337 18:30:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:15.337 18:30:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.337 18:30:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.337 [2024-07-15 18:30:31.991076] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:05:15.337 [2024-07-15 18:30:31.991127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid920783 ] 00:05:15.337 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.595 [2024-07-15 18:30:32.061923] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.595 [2024-07-15 18:30:32.205870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.163 18:30:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.163 18:30:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:16.163 18:30:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 920783 00:05:16.163 18:30:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 920783 00:05:16.163 18:30:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:16.728 lslocks: write error 00:05:16.728 18:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 920764 00:05:16.728 18:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 920764 ']' 00:05:16.728 18:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 920764 00:05:16.728 18:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:16.728 18:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:16.728 18:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 920764 00:05:16.728 18:30:33 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:16.728 18:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:16.728 18:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 920764' 00:05:16.728 killing process with pid 920764 00:05:16.728 18:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 920764 00:05:16.728 18:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 920764 00:05:17.295 18:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 920783 00:05:17.295 18:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 920783 ']' 00:05:17.295 18:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 920783 00:05:17.295 18:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:17.295 18:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.295 18:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 920783 00:05:17.295 18:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:17.295 18:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:17.295 18:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 920783' 00:05:17.295 killing process with pid 920783 00:05:17.295 18:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@967 -- # kill 920783 00:05:17.295 18:30:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 920783 00:05:17.554 00:05:17.554 real 0m3.157s 00:05:17.554 user 0m3.400s 00:05:17.554 sys 0m0.906s 00:05:17.554 18:30:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.554 18:30:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.554 ************************************ 00:05:17.554 END TEST locking_app_on_unlocked_coremask 00:05:17.554 ************************************ 00:05:17.812 18:30:34 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:17.812 18:30:34 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:17.812 18:30:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.812 18:30:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.812 18:30:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.812 ************************************ 00:05:17.812 START TEST locking_app_on_locked_coremask 00:05:17.812 ************************************ 00:05:17.812 18:30:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:17.812 18:30:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=921270 00:05:17.812 18:30:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 921270 /var/tmp/spdk.sock 00:05:17.812 18:30:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.812 18:30:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 921270 
']' 00:05:17.812 18:30:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.812 18:30:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.812 18:30:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.812 18:30:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.812 18:30:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.812 [2024-07-15 18:30:34.367447] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:05:17.812 [2024-07-15 18:30:34.367485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid921270 ] 00:05:17.812 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.812 [2024-07-15 18:30:34.419574] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.812 [2024-07-15 18:30:34.498962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.748 18:30:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.748 18:30:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:18.748 18:30:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=921488 00:05:18.748 18:30:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 921488 
/var/tmp/spdk2.sock 00:05:18.748 18:30:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:18.748 18:30:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:18.748 18:30:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 921488 /var/tmp/spdk2.sock 00:05:18.748 18:30:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:18.748 18:30:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:18.748 18:30:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:18.748 18:30:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:18.748 18:30:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 921488 /var/tmp/spdk2.sock 00:05:18.748 18:30:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 921488 ']' 00:05:18.748 18:30:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:18.748 18:30:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.748 18:30:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:18.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:18.748 18:30:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.748 18:30:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.748 [2024-07-15 18:30:35.215199] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:05:18.748 [2024-07-15 18:30:35.215265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid921488 ] 00:05:18.748 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.748 [2024-07-15 18:30:35.292845] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 921270 has claimed it. 00:05:18.748 [2024-07-15 18:30:35.292878] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:19.316 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (921488) - No such process 00:05:19.316 ERROR: process (pid: 921488) is no longer running 00:05:19.316 18:30:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.316 18:30:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:19.316 18:30:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:19.316 18:30:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:19.316 18:30:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:19.316 18:30:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:19.316 18:30:35 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@122 -- # locks_exist 921270 00:05:19.316 18:30:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 921270 00:05:19.316 18:30:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:19.575 lslocks: write error 00:05:19.575 18:30:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 921270 00:05:19.575 18:30:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 921270 ']' 00:05:19.575 18:30:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 921270 00:05:19.575 18:30:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:19.575 18:30:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:19.575 18:30:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 921270 00:05:19.834 18:30:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:19.834 18:30:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:19.834 18:30:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 921270' 00:05:19.834 killing process with pid 921270 00:05:19.834 18:30:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 921270 00:05:19.834 18:30:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 921270 00:05:20.094 00:05:20.094 real 0m2.284s 00:05:20.094 user 0m2.523s 00:05:20.094 sys 0m0.620s 00:05:20.094 18:30:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.094 18:30:36 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.094 ************************************ 00:05:20.094 END TEST locking_app_on_locked_coremask 00:05:20.094 ************************************ 00:05:20.094 18:30:36 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:20.094 18:30:36 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:20.094 18:30:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.094 18:30:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.094 18:30:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.094 ************************************ 00:05:20.094 START TEST locking_overlapped_coremask 00:05:20.094 ************************************ 00:05:20.094 18:30:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:20.094 18:30:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=921764 00:05:20.094 18:30:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:20.094 18:30:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 921764 /var/tmp/spdk.sock 00:05:20.094 18:30:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 921764 ']' 00:05:20.094 18:30:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.094 18:30:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.094 18:30:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:05:20.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.094 18:30:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.094 18:30:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.094 [2024-07-15 18:30:36.710401] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:05:20.094 [2024-07-15 18:30:36.710437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid921764 ] 00:05:20.094 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.094 [2024-07-15 18:30:36.763203] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:20.353 [2024-07-15 18:30:36.843386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.353 [2024-07-15 18:30:36.843483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.353 [2024-07-15 18:30:36.843485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.953 18:30:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.953 18:30:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:20.953 18:30:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=921777 00:05:20.953 18:30:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 921777 /var/tmp/spdk2.sock 00:05:20.953 18:30:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:20.953 18:30:37 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:20.953 18:30:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 921777 /var/tmp/spdk2.sock 00:05:20.953 18:30:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:20.953 18:30:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:20.953 18:30:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:20.953 18:30:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:20.953 18:30:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 921777 /var/tmp/spdk2.sock 00:05:20.953 18:30:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 921777 ']' 00:05:20.953 18:30:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:20.953 18:30:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.953 18:30:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:20.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:20.953 18:30:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.953 18:30:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.953 [2024-07-15 18:30:37.575423] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
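The two targets in this trace are started with core masks 0x7 and 0x1c, and the lock failure that follows names core 2 — the lowest core the two masks share. A quick standalone check of that mask arithmetic (a sketch in plain bash, not part of the test suite):

```shell
# Core masks taken from the two spdk_tgt invocations in this trace.
first_mask=0x7     # cores 0, 1, 2 (first target, holds the locks)
second_mask=0x1c   # cores 2, 3, 4 (second target, will collide)

overlap=$(( first_mask & second_mask ))
printf 'overlap=0x%x\n' "$overlap"   # -> overlap=0x4

# Lowest set bit of the overlap = first core the second target fails to lock.
v=$(( overlap & -overlap )); core=0
while (( v > 1 )); do v=$(( v >> 1 )); core=$(( core + 1 )); done
echo "contested core: $core"         # -> contested core: 2
```

This matches the "Cannot create lock on core 2" error logged by claim_cpu_cores.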
00:05:20.953 [2024-07-15 18:30:37.575471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid921777 ] 00:05:20.953 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.953 [2024-07-15 18:30:37.652641] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 921764 has claimed it. 00:05:20.953 [2024-07-15 18:30:37.652676] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:21.522 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (921777) - No such process 00:05:21.522 ERROR: process (pid: 921777) is no longer running 00:05:21.522 18:30:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.522 18:30:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:21.522 18:30:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:21.522 18:30:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:21.522 18:30:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:21.522 18:30:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:21.522 18:30:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:21.522 18:30:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:21.522 18:30:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:21.522 18:30:38 event.cpu_locks.locking_overlapped_coremask 
-- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:21.522 18:30:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 921764 00:05:21.522 18:30:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 921764 ']' 00:05:21.522 18:30:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 921764 00:05:21.522 18:30:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:21.522 18:30:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:21.522 18:30:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 921764 00:05:21.781 18:30:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:21.781 18:30:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:21.781 18:30:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 921764' 00:05:21.781 killing process with pid 921764 00:05:21.781 18:30:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 921764 00:05:21.782 18:30:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 921764 00:05:22.041 00:05:22.041 real 0m1.896s 00:05:22.041 user 0m5.402s 00:05:22.041 sys 0m0.390s 00:05:22.041 18:30:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.041 18:30:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- 
# set +x 00:05:22.041 ************************************ 00:05:22.041 END TEST locking_overlapped_coremask 00:05:22.041 ************************************ 00:05:22.041 18:30:38 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:22.042 18:30:38 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:22.042 18:30:38 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.042 18:30:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.042 18:30:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.042 ************************************ 00:05:22.042 START TEST locking_overlapped_coremask_via_rpc 00:05:22.042 ************************************ 00:05:22.042 18:30:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:22.042 18:30:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=922041 00:05:22.042 18:30:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 922041 /var/tmp/spdk.sock 00:05:22.042 18:30:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:22.042 18:30:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 922041 ']' 00:05:22.042 18:30:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.042 18:30:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.042 18:30:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.042 18:30:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.042 18:30:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.042 [2024-07-15 18:30:38.686344] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:05:22.042 [2024-07-15 18:30:38.686396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid922041 ] 00:05:22.042 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.042 [2024-07-15 18:30:38.741864] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:22.042 [2024-07-15 18:30:38.741888] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:22.301 [2024-07-15 18:30:38.812319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.301 [2024-07-15 18:30:38.812413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:22.301 [2024-07-15 18:30:38.812415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.869 18:30:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.869 18:30:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:22.869 18:30:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=922267 00:05:22.869 18:30:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 922267 /var/tmp/spdk2.sock 00:05:22.869 18:30:39 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:22.869 18:30:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 922267 ']' 00:05:22.869 18:30:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:22.869 18:30:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.869 18:30:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:22.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:22.869 18:30:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.869 18:30:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.869 [2024-07-15 18:30:39.528356] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:05:22.869 [2024-07-15 18:30:39.528404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid922267 ] 00:05:22.869 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.128 [2024-07-15 18:30:39.605605] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
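The check_remaining_locks comparison that appears in this trace globs /var/tmp/spdk_cpu_lock_* and compares the result against a brace expansion of the cores the target should hold. A minimal self-contained sketch of that check, using a temporary directory and a hand-made file set instead of a live target:

```shell
# Sketch of the lock-file check from event/cpu_locks.sh: a target holding
# cores 0-2 should leave exactly one lock file per claimed core.
locks_dir=$(mktemp -d)
touch "$locks_dir"/spdk_cpu_lock_000 "$locks_dir"/spdk_cpu_lock_001 "$locks_dir"/spdk_cpu_lock_002

locks=("$locks_dir"/spdk_cpu_lock_*)                    # what is actually on disk
locks_expected=("$locks_dir"/spdk_cpu_lock_{000..002})  # what cores 0-2 should leave
[[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "locks match"
rm -rf "$locks_dir"
```

Both the glob and the brace expansion sort the same way, so a plain string comparison of the expanded arrays is enough, which is exactly what the `[[ ... == ... ]]` lines in the trace do.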
00:05:23.128 [2024-07-15 18:30:39.605630] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:23.128 [2024-07-15 18:30:39.756781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:23.128 [2024-07-15 18:30:39.756899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.128 [2024-07-15 18:30:39.756899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:23.695 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.695 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:23.695 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:23.695 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.695 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.695 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.695 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:23.695 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:23.695 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:23.695 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:23.695 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.695 18:30:40 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:05:23.695 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:23.695 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:23.695 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:23.695 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:23.695 [2024-07-15 18:30:40.358301] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 922041 has claimed it.
00:05:23.695 request:
00:05:23.695 {
00:05:23.695 "method": "framework_enable_cpumask_locks",
00:05:23.695 "req_id": 1
00:05:23.695 }
00:05:23.695 Got JSON-RPC error response
00:05:23.695 response:
00:05:23.695 {
00:05:23.695 "code": -32603,
00:05:23.695 "message": "Failed to claim CPU core: 2"
00:05:23.695 }
00:05:23.695 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:05:23.695 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1
00:05:23.695 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:23.695 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:05:23.695 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:23.695 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 922041 /var/tmp/spdk.sock
00:05:23.695 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 --
# '[' -z 922041 ']' 00:05:23.695 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.695 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.695 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.695 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.695 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.953 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.953 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:23.953 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 922267 /var/tmp/spdk2.sock 00:05:23.953 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 922267 ']' 00:05:23.953 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.953 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.953 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
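The rpc_cmd call traced here speaks JSON-RPC over a Unix-domain socket; SPDK's bundled rpc.py is the real client. As an illustrative sketch only, the same framework_enable_cpumask_locks request could be sent by hand with netcat's Unix-socket mode (assumes an openbsd-netcat style `-U` flag and a running target; the reply format shown in the log is SPDK's internal dump of the error object):

```shell
# Hypothetical one-shot JSON-RPC client; /var/tmp/spdk2.sock is the secondary
# target's RPC socket from this run and only exists while that target is up.
sock=/var/tmp/spdk2.sock
req='{"jsonrpc": "2.0", "method": "framework_enable_cpumask_locks", "id": 1}'
if [ -S "$sock" ]; then
    printf '%s' "$req" | nc -U "$sock"
else
    echo "no target listening at $sock"
fi
```

In this run the call fails with code -32603 ("Failed to claim CPU core: 2") because the primary target already holds the overlapping core's lock file.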
00:05:23.953 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.953 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.212 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.212 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:24.212 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:24.212 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:24.212 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:24.212 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:24.212 00:05:24.212 real 0m2.125s 00:05:24.212 user 0m0.888s 00:05:24.212 sys 0m0.162s 00:05:24.212 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.212 18:30:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.212 ************************************ 00:05:24.212 END TEST locking_overlapped_coremask_via_rpc 00:05:24.212 ************************************ 00:05:24.212 18:30:40 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:24.212 18:30:40 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:24.212 18:30:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 
922041 ]] 00:05:24.212 18:30:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 922041 00:05:24.212 18:30:40 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 922041 ']' 00:05:24.212 18:30:40 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 922041 00:05:24.212 18:30:40 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:24.212 18:30:40 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:24.212 18:30:40 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 922041 00:05:24.212 18:30:40 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:24.212 18:30:40 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:24.212 18:30:40 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 922041' 00:05:24.212 killing process with pid 922041 00:05:24.212 18:30:40 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 922041 00:05:24.212 18:30:40 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 922041 00:05:24.471 18:30:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 922267 ]] 00:05:24.471 18:30:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 922267 00:05:24.471 18:30:41 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 922267 ']' 00:05:24.471 18:30:41 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 922267 00:05:24.471 18:30:41 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:24.471 18:30:41 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:24.471 18:30:41 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 922267 00:05:24.731 18:30:41 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:24.731 18:30:41 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:24.731 18:30:41 event.cpu_locks -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 922267' 00:05:24.731 killing process with pid 922267 00:05:24.731 18:30:41 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 922267 00:05:24.731 18:30:41 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 922267 00:05:24.990 18:30:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:24.990 18:30:41 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:24.990 18:30:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 922041 ]] 00:05:24.990 18:30:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 922041 00:05:24.990 18:30:41 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 922041 ']' 00:05:24.990 18:30:41 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 922041 00:05:24.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (922041) - No such process 00:05:24.990 18:30:41 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 922041 is not found' 00:05:24.990 Process with pid 922041 is not found 00:05:24.990 18:30:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 922267 ]] 00:05:24.990 18:30:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 922267 00:05:24.990 18:30:41 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 922267 ']' 00:05:24.990 18:30:41 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 922267 00:05:24.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (922267) - No such process 00:05:24.990 18:30:41 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 922267 is not found' 00:05:24.990 Process with pid 922267 is not found 00:05:24.990 18:30:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:24.990 00:05:24.990 real 0m16.907s 00:05:24.990 user 0m29.466s 00:05:24.990 sys 0m4.814s 00:05:24.990 18:30:41 event.cpu_locks -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.990 18:30:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.990 ************************************ 00:05:24.990 END TEST cpu_locks 00:05:24.990 ************************************ 00:05:24.990 18:30:41 event -- common/autotest_common.sh@1142 -- # return 0 00:05:24.990 00:05:24.990 real 0m41.672s 00:05:24.990 user 1m19.662s 00:05:24.990 sys 0m8.049s 00:05:24.990 18:30:41 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.990 18:30:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.990 ************************************ 00:05:24.990 END TEST event 00:05:24.990 ************************************ 00:05:24.990 18:30:41 -- common/autotest_common.sh@1142 -- # return 0 00:05:24.990 18:30:41 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:24.990 18:30:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.990 18:30:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.990 18:30:41 -- common/autotest_common.sh@10 -- # set +x 00:05:24.990 ************************************ 00:05:24.990 START TEST thread 00:05:24.990 ************************************ 00:05:24.990 18:30:41 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:25.249 * Looking for test storage... 
00:05:25.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:25.249 18:30:41 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:25.249 18:30:41 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:25.249 18:30:41 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.249 18:30:41 thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.249 ************************************ 00:05:25.249 START TEST thread_poller_perf 00:05:25.249 ************************************ 00:05:25.249 18:30:41 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:25.249 [2024-07-15 18:30:41.751626] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:05:25.249 [2024-07-15 18:30:41.751676] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid922641 ] 00:05:25.249 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.249 [2024-07-15 18:30:41.800481] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.249 [2024-07-15 18:30:41.874999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.249 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:26.625 ======================================
00:05:26.625 busy:2307192970 (cyc)
00:05:26.625 total_run_count: 414000
00:05:26.625 tsc_hz: 2300000000 (cyc)
00:05:26.625 ======================================
00:05:26.625 poller_cost: 5572 (cyc), 2422 (nsec)
00:05:26.625
00:05:26.625 real 0m1.209s
00:05:26.625 user 0m1.136s
00:05:26.625 sys 0m0.070s 18:30:42 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:26.625 18:30:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:26.625 ************************************
00:05:26.625 END TEST thread_poller_perf
00:05:26.625 ************************************
00:05:26.625 18:30:42 thread -- common/autotest_common.sh@1142 -- # return 0
00:05:26.625 18:30:42 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:26.625 18:30:42 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']'
00:05:26.625 18:30:42 thread -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:26.625 18:30:42 thread -- common/autotest_common.sh@10 -- # set +x
00:05:26.625 ************************************
00:05:26.625 START TEST thread_poller_perf
00:05:26.625 ************************************
00:05:26.625 18:30:42 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:26.626 [2024-07-15 18:30:43.009902] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization...
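The poller_cost line in the results banner above follows directly from the busy cycle count, the run count, and the TSC frequency. Recomputing it from the logged values (plain bash arithmetic; integer truncation is assumed, which matches the reported numbers):

```shell
# Values copied from the first poller_perf results banner in this log.
busy_cyc=2307192970       # busy: (cyc)
total_run_count=414000
tsc_hz=2300000000         # 2.3 GHz

cost_cyc=$(( busy_cyc / total_run_count ))            # cycles per poller iteration
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))       # cycles -> nanoseconds
echo "$cost_cyc $cost_nsec"   # -> 5572 2422, matching "poller_cost: 5572 (cyc), 2422 (nsec)"
```

The same arithmetic applied to the zero-period run that follows (2301544834 cycles over 5270000 runs) yields its reported 436 cyc / 189 nsec cost.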
00:05:26.626 [2024-07-15 18:30:43.009947] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid922873 ] 00:05:26.626 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.626 [2024-07-15 18:30:43.062892] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.626 [2024-07-15 18:30:43.135320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.626 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:27.560 ====================================== 00:05:27.560 busy:2301544834 (cyc) 00:05:27.560 total_run_count: 5270000 00:05:27.560 tsc_hz: 2300000000 (cyc) 00:05:27.560 ====================================== 00:05:27.560 poller_cost: 436 (cyc), 189 (nsec) 00:05:27.560 00:05:27.560 real 0m1.204s 00:05:27.560 user 0m1.138s 00:05:27.560 sys 0m0.062s 00:05:27.560 18:30:44 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.560 18:30:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:27.560 ************************************ 00:05:27.560 END TEST thread_poller_perf 00:05:27.560 ************************************ 00:05:27.560 18:30:44 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:27.560 18:30:44 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:27.560 00:05:27.560 real 0m2.601s 00:05:27.560 user 0m2.347s 00:05:27.560 sys 0m0.260s 00:05:27.560 18:30:44 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.560 18:30:44 thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.560 ************************************ 00:05:27.560 END TEST thread 00:05:27.560 ************************************ 00:05:27.560 18:30:44 -- common/autotest_common.sh@1142 -- # return 0 00:05:27.560 18:30:44 -- spdk/autotest.sh@183 -- # run_test 
accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:27.560 18:30:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.560 18:30:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.560 18:30:44 -- common/autotest_common.sh@10 -- # set +x 00:05:27.819 ************************************ 00:05:27.819 START TEST accel 00:05:27.819 ************************************ 00:05:27.819 18:30:44 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:27.819 * Looking for test storage... 00:05:27.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:27.819 18:30:44 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:27.819 18:30:44 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:27.819 18:30:44 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:27.819 18:30:44 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=923159 00:05:27.819 18:30:44 accel -- accel/accel.sh@63 -- # waitforlisten 923159 00:05:27.819 18:30:44 accel -- common/autotest_common.sh@829 -- # '[' -z 923159 ']' 00:05:27.819 18:30:44 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.819 18:30:44 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:27.819 18:30:44 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.819 18:30:44 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:27.819 18:30:44 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:27.819 18:30:44 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.819 18:30:44 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.819 18:30:44 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.819 18:30:44 accel -- common/autotest_common.sh@10 -- # set +x 00:05:27.819 18:30:44 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.819 18:30:44 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.819 18:30:44 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.819 18:30:44 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:27.819 18:30:44 accel -- accel/accel.sh@41 -- # jq -r . 00:05:27.819 [2024-07-15 18:30:44.434791] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:05:27.819 [2024-07-15 18:30:44.434842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid923159 ] 00:05:27.819 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.819 [2024-07-15 18:30:44.489322] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.078 [2024-07-15 18:30:44.562904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.646 18:30:45 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.646 18:30:45 accel -- common/autotest_common.sh@862 -- # return 0 00:05:28.646 18:30:45 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:28.646 18:30:45 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:28.646 18:30:45 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:28.646 18:30:45 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:28.646 18:30:45 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:28.646 18:30:45 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:28.646 18:30:45 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.646 18:30:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:28.646 18:30:45 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:05:28.646 18:30:45 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.646 18:30:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:28.646 18:30:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:28.646 18:30:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:28.646 18:30:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:28.646 18:30:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:28.646 18:30:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:28.646 18:30:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:28.646 18:30:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:28.646 18:30:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:28.646 18:30:45 accel -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:05:28.646 18:30:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:28.646 18:30:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:28.646 18:30:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:28.646 18:30:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:28.646 18:30:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:28.646 18:30:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:28.646 18:30:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:28.646 18:30:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:28.646 18:30:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:28.646 18:30:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:28.646 18:30:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:28.646 18:30:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:28.646 18:30:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # 
IFS== 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:28.646 18:30:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:28.646 18:30:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:28.646 18:30:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:28.646 18:30:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:28.646 18:30:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:28.646 18:30:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:28.646 18:30:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:28.646 18:30:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:28.646 18:30:45 accel -- accel/accel.sh@75 -- # killprocess 923159 00:05:28.646 18:30:45 accel -- common/autotest_common.sh@948 -- # '[' -z 923159 ']' 00:05:28.646 18:30:45 accel -- common/autotest_common.sh@952 -- # kill -0 923159 00:05:28.646 18:30:45 accel -- common/autotest_common.sh@953 -- # uname 00:05:28.646 18:30:45 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:28.646 18:30:45 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 923159 00:05:28.646 18:30:45 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:28.646 18:30:45 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:28.646 18:30:45 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 923159' 00:05:28.646 killing process with pid 923159 00:05:28.646 18:30:45 accel -- common/autotest_common.sh@967 -- # kill 923159 00:05:28.646 
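The long xtrace run above is accel.sh iterating over the opcode assignments returned by the accel_get_opc_assignments RPC, splitting each "opcode=module" line on '='. A minimal sketch of that parsing loop, with illustrative sample lines standing in for real RPC output:

```shell
#!/usr/bin/env bash
# Sketch of the accel.sh parsing loop seen in the xtrace above:
# each "opcode=module" line is split with IFS='=' into opc and module,
# and recorded in an associative array. The two sample lines here are
# illustrative, not actual accel_get_opc_assignments output.
declare -A expected_opcs

while IFS='=' read -r opc module; do
    expected_opcs["$opc"]=$module
done <<'EOF'
copy=software
crc32c=software
EOF

echo "copy handled by: ${expected_opcs[copy]}"
```

In this log every opcode maps to the software module, which matches the repeated `expected_opcs["$opc"]=software` assignments above.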
18:30:45 accel -- common/autotest_common.sh@972 -- # wait 923159 00:05:29.215 18:30:45 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:29.215 18:30:45 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:29.215 18:30:45 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:29.215 18:30:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.215 18:30:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:29.215 18:30:45 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:29.215 18:30:45 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:29.215 18:30:45 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:29.215 18:30:45 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:29.215 18:30:45 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:29.215 18:30:45 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.215 18:30:45 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.215 18:30:45 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:29.215 18:30:45 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:29.215 18:30:45 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:29.215 18:30:45 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.215 18:30:45 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:29.215 18:30:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:29.215 18:30:45 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:29.215 18:30:45 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:29.215 18:30:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.215 18:30:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:29.215 ************************************ 00:05:29.215 START TEST accel_missing_filename 00:05:29.215 ************************************ 00:05:29.215 18:30:45 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:29.215 18:30:45 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:29.215 18:30:45 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:29.215 18:30:45 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:29.215 18:30:45 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.215 18:30:45 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:29.215 18:30:45 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.215 18:30:45 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:29.215 18:30:45 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:29.215 18:30:45 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:29.215 18:30:45 
accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:29.215 18:30:45 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:29.215 18:30:45 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.215 18:30:45 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.215 18:30:45 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:29.215 18:30:45 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:29.215 18:30:45 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:29.215 [2024-07-15 18:30:45.792349] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:05:29.215 [2024-07-15 18:30:45.792418] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid923434 ] 00:05:29.215 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.215 [2024-07-15 18:30:45.851154] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.474 [2024-07-15 18:30:45.924476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.474 [2024-07-15 18:30:45.965516] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:29.474 [2024-07-15 18:30:46.025177] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:29.474 A filename is required. 
00:05:29.474 18:30:46 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:29.474 18:30:46 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:29.474 18:30:46 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:29.474 18:30:46 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:29.474 18:30:46 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:29.474 18:30:46 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:29.474 00:05:29.474 real 0m0.335s 00:05:29.474 user 0m0.247s 00:05:29.474 sys 0m0.119s 00:05:29.474 18:30:46 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.474 18:30:46 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:29.474 ************************************ 00:05:29.474 END TEST accel_missing_filename 00:05:29.474 ************************************ 00:05:29.474 18:30:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:29.474 18:30:46 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:29.475 18:30:46 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:29.475 18:30:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.475 18:30:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:29.475 ************************************ 00:05:29.475 START TEST accel_compress_verify 00:05:29.475 ************************************ 00:05:29.475 18:30:46 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:29.475 18:30:46 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:29.475 18:30:46 
accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:29.475 18:30:46 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:29.475 18:30:46 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.475 18:30:46 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:29.475 18:30:46 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.475 18:30:46 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:29.475 18:30:46 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:29.475 18:30:46 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:29.475 18:30:46 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:29.475 18:30:46 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:29.475 18:30:46 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.475 18:30:46 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.475 18:30:46 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:29.475 18:30:46 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:29.475 18:30:46 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:29.734 [2024-07-15 18:30:46.190942] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:05:29.734 [2024-07-15 18:30:46.190992] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid923666 ] 00:05:29.734 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.734 [2024-07-15 18:30:46.245869] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.734 [2024-07-15 18:30:46.318749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.734 [2024-07-15 18:30:46.359292] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:29.734 [2024-07-15 18:30:46.418770] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:29.993 00:05:29.993 Compression does not support the verify option, aborting. 00:05:29.993 18:30:46 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:29.993 18:30:46 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:29.993 18:30:46 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:29.993 18:30:46 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:29.993 18:30:46 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:29.993 18:30:46 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:29.993 00:05:29.993 real 0m0.328s 00:05:29.993 user 0m0.251s 00:05:29.993 sys 0m0.116s 00:05:29.993 18:30:46 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.993 18:30:46 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:29.993 ************************************ 00:05:29.993 END TEST accel_compress_verify 00:05:29.993 ************************************ 00:05:29.993 18:30:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:29.993 18:30:46 accel -- 
accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:29.994 18:30:46 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:29.994 18:30:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.994 18:30:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:29.994 ************************************ 00:05:29.994 START TEST accel_wrong_workload 00:05:29.994 ************************************ 00:05:29.994 18:30:46 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:29.994 18:30:46 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:29.994 18:30:46 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:29.994 18:30:46 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:29.994 18:30:46 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.994 18:30:46 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:29.994 18:30:46 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.994 18:30:46 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:29.994 18:30:46 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:29.994 18:30:46 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:29.994 18:30:46 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:29.994 18:30:46 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:29.994 18:30:46 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.994 18:30:46 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
00:05:29.994 18:30:46 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:29.994 18:30:46 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:29.994 18:30:46 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:29.994 Unsupported workload type: foobar 00:05:29.994 [2024-07-15 18:30:46.584041] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:29.994 accel_perf options: 00:05:29.994 [-h help message] 00:05:29.994 [-q queue depth per core] 00:05:29.994 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:29.994 [-T number of threads per core 00:05:29.994 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:29.994 [-t time in seconds] 00:05:29.994 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:29.994 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:29.994 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:29.994 [-l for compress/decompress workloads, name of uncompressed input file 00:05:29.994 [-S for crc32c workload, use this seed value (default 0) 00:05:29.994 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:29.994 [-f for fill workload, use this BYTE value (default 255) 00:05:29.994 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:29.994 [-y verify result if this switch is on] 00:05:29.994 [-a tasks to allocate per core (default: same value as -q)] 00:05:29.994 Can be used to spread operations across a wider range of memory. 
00:05:29.994 18:30:46 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:29.994 18:30:46 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:29.994 18:30:46 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:29.994 18:30:46 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:29.994 00:05:29.994 real 0m0.032s 00:05:29.994 user 0m0.022s 00:05:29.994 sys 0m0.010s 00:05:29.994 18:30:46 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.994 18:30:46 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:29.994 ************************************ 00:05:29.994 END TEST accel_wrong_workload 00:05:29.994 ************************************ 00:05:29.994 Error: writing output failed: Broken pipe 00:05:29.994 18:30:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:29.994 18:30:46 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:29.994 18:30:46 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:29.994 18:30:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.994 18:30:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:29.994 ************************************ 00:05:29.994 START TEST accel_negative_buffers 00:05:29.994 ************************************ 00:05:29.994 18:30:46 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:29.994 18:30:46 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:29.994 18:30:46 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:29.994 18:30:46 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:29.994 18:30:46 accel.accel_negative_buffers -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.994 18:30:46 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:29.994 18:30:46 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.994 18:30:46 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:29.994 18:30:46 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:29.994 18:30:46 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:29.994 18:30:46 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:29.994 18:30:46 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:29.994 18:30:46 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.994 18:30:46 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.994 18:30:46 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:29.994 18:30:46 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:29.994 18:30:46 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:29.994 -x option must be non-negative. 00:05:29.994 [2024-07-15 18:30:46.673725] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:29.994 accel_perf options: 00:05:29.994 [-h help message] 00:05:29.994 [-q queue depth per core] 00:05:29.994 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:29.994 [-T number of threads per core 00:05:29.994 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:05:29.994 [-t time in seconds] 00:05:29.994 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:29.994 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:29.994 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:29.994 [-l for compress/decompress workloads, name of uncompressed input file 00:05:29.994 [-S for crc32c workload, use this seed value (default 0) 00:05:29.994 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:29.994 [-f for fill workload, use this BYTE value (default 255) 00:05:29.994 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:29.994 [-y verify result if this switch is on] 00:05:29.994 [-a tasks to allocate per core (default: same value as -q)] 00:05:29.994 Can be used to spread operations across a wider range of memory. 
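Both accel_wrong_workload and accel_negative_buffers are negative tests: run_test wraps accel_perf in NOT, so the test passes only when the tool exits non-zero. A simplified stand-in for the NOT helper from autotest_common.sh (the real one also inspects specific exit codes, as the `es=` lines in this log show):

```shell
# Simplified stand-in for the NOT helper used by these negative tests.
# It inverts the exit status: success means the wrapped command failed.
NOT() {
    if "$@"; then
        return 1    # wrapped command unexpectedly succeeded
    fi
    return 0        # wrapped command failed, which is what we want
}

# e.g. accel_perf rejects '-w foobar' and '-x -1', so wrapping the
# failing invocation in NOT makes the test case pass:
NOT false && echo "negative test passed"
```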
00:05:29.994 18:30:46 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:29.994 18:30:46 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:29.994 18:30:46 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:29.994 18:30:46 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:29.994 00:05:29.994 real 0m0.031s 00:05:29.994 user 0m0.018s 00:05:29.994 sys 0m0.013s 00:05:29.994 18:30:46 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.994 18:30:46 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:29.994 ************************************ 00:05:29.994 END TEST accel_negative_buffers 00:05:29.994 ************************************ 00:05:29.994 Error: writing output failed: Broken pipe 00:05:30.254 18:30:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:30.254 18:30:46 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:30.254 18:30:46 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:30.254 18:30:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.254 18:30:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.254 ************************************ 00:05:30.254 START TEST accel_crc32c 00:05:30.254 ************************************ 00:05:30.254 18:30:46 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 
00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:30.254 [2024-07-15 18:30:46.752097] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:05:30.254 [2024-07-15 18:30:46.752142] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid923729 ] 00:05:30.254 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.254 [2024-07-15 18:30:46.807333] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.254 [2024-07-15 18:30:46.879442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.254 
18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.254 18:30:46 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.254 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # 
IFS=: 00:05:30.255 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.255 18:30:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:30.255 18:30:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.255 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.255 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.255 18:30:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:30.255 18:30:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.255 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.255 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.255 18:30:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:30.255 18:30:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.255 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.255 18:30:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.631 18:30:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.631 18:30:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.631 18:30:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.631 18:30:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.631 18:30:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.631 18:30:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.631 18:30:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.631 18:30:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.631 18:30:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.631 18:30:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.631 18:30:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.631 18:30:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.631 18:30:48 accel.accel_crc32c -- 
accel/accel.sh@20 -- # val= 00:05:31.631 18:30:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.631 18:30:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.631 18:30:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.631 18:30:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.631 18:30:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.631 18:30:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.631 18:30:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.631 18:30:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.631 18:30:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.631 18:30:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.631 18:30:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.631 18:30:48 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:31.631 18:30:48 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:31.631 18:30:48 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:31.631 00:05:31.631 real 0m1.321s 00:05:31.631 user 0m1.225s 00:05:31.631 sys 0m0.110s 00:05:31.631 18:30:48 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.631 18:30:48 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:31.631 ************************************ 00:05:31.631 END TEST accel_crc32c 00:05:31.631 ************************************ 00:05:31.631 18:30:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:31.631 18:30:48 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:31.631 18:30:48 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:31.631 18:30:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.631 18:30:48 accel -- common/autotest_common.sh@10 -- # set +x 
00:05:31.631 ************************************ 00:05:31.631 START TEST accel_crc32c_C2 00:05:31.631 ************************************ 00:05:31.631 18:30:48 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:31.631 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:31.631 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:31.631 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.631 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.631 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:31.631 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:31.631 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:31.631 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.631 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.631 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.631 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.631 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.631 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:31.631 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:31.631 [2024-07-15 18:30:48.152412] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:05:31.631 [2024-07-15 18:30:48.152461] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid923986 ] 00:05:31.632 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.632 [2024-07-15 18:30:48.207717] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.632 [2024-07-15 18:30:48.280342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
read -r var val 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.632 18:30:48 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.632 18:30:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.007 18:30:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.007 18:30:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.007 18:30:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.007 18:30:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.007 18:30:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.007 18:30:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.007 18:30:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.007 18:30:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.007 18:30:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.007 18:30:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.007 18:30:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.007 18:30:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.007 18:30:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.007 18:30:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.007 18:30:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.007 18:30:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.007 18:30:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.007 18:30:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.007 18:30:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.007 18:30:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.008 18:30:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.008 
18:30:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.008 18:30:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.008 18:30:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.008 18:30:49 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:33.008 18:30:49 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:33.008 18:30:49 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:33.008 00:05:33.008 real 0m1.334s 00:05:33.008 user 0m1.234s 00:05:33.008 sys 0m0.113s 00:05:33.008 18:30:49 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.008 18:30:49 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:33.008 ************************************ 00:05:33.008 END TEST accel_crc32c_C2 00:05:33.008 ************************************ 00:05:33.008 18:30:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:33.008 18:30:49 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:33.008 18:30:49 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:33.008 18:30:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.008 18:30:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.008 ************************************ 00:05:33.008 START TEST accel_copy 00:05:33.008 ************************************ 00:05:33.008 18:30:49 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:33.008 18:30:49 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:33.008 18:30:49 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:33.008 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:33.008 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:33.008 18:30:49 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 
00:05:33.008 18:30:49 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:33.008 18:30:49 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:33.008 18:30:49 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.008 18:30:49 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.008 18:30:49 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.008 18:30:49 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.008 18:30:49 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.008 18:30:49 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:33.008 18:30:49 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:33.008 [2024-07-15 18:30:49.550393] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:05:33.008 [2024-07-15 18:30:49.550448] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid924231 ] 00:05:33.008 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.008 [2024-07-15 18:30:49.606677] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.008 [2024-07-15 18:30:49.680583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:33.266 18:30:49 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:33.266 18:30:49 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:33.266 18:30:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.202 18:30:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:34.202 18:30:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.202 18:30:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.202 18:30:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.202 18:30:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:34.202 18:30:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.202 18:30:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.202 18:30:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.202 18:30:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:34.202 18:30:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.202 18:30:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.202 18:30:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.202 18:30:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:34.202 18:30:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.202 18:30:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.202 18:30:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.202 18:30:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:34.202 18:30:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.202 18:30:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 
00:05:34.203 18:30:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.203 18:30:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:34.203 18:30:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.203 18:30:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.203 18:30:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.203 18:30:50 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:34.203 18:30:50 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:34.203 18:30:50 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.203 00:05:34.203 real 0m1.337s 00:05:34.203 user 0m1.229s 00:05:34.203 sys 0m0.121s 00:05:34.203 18:30:50 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.203 18:30:50 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:34.203 ************************************ 00:05:34.203 END TEST accel_copy 00:05:34.203 ************************************ 00:05:34.203 18:30:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:34.203 18:30:50 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:34.203 18:30:50 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:34.203 18:30:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.203 18:30:50 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.461 ************************************ 00:05:34.461 START TEST accel_fill 00:05:34.461 ************************************ 00:05:34.461 18:30:50 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:34.461 18:30:50 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:34.461 18:30:50 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:34.461 18:30:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:34.461 18:30:50 
accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:34.461 18:30:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:34.461 18:30:50 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:34.461 18:30:50 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:34.461 18:30:50 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.461 18:30:50 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.461 18:30:50 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.461 18:30:50 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.461 18:30:50 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.461 18:30:50 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:34.461 18:30:50 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:34.461 [2024-07-15 18:30:50.948896] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:05:34.461 [2024-07-15 18:30:50.948949] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid924485 ] 00:05:34.461 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.461 [2024-07-15 18:30:51.004293] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.461 [2024-07-15 18:30:51.076759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.461 18:30:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:34.461 18:30:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:34.461 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:34.461 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:34.461 18:30:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:34.461 18:30:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:34.461 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:34.461 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:34.461 18:30:51 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:34.461 18:30:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:34.461 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:34.461 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:34.461 18:30:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:34.461 18:30:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:34.461 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # 
IFS=: 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:34.462 18:30:51 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:34.462 18:30:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:35.842 18:30:52 accel.accel_fill -- accel/accel.sh@20 
-- # val= 00:05:35.842 18:30:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:35.842 18:30:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:35.842 18:30:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:35.842 18:30:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:35.842 18:30:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:35.842 18:30:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:35.842 18:30:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:35.842 18:30:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:35.842 18:30:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:35.842 18:30:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:35.842 18:30:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:35.842 18:30:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:35.842 18:30:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:35.842 18:30:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:35.842 18:30:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:35.842 18:30:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:35.842 18:30:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:35.842 18:30:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:35.842 18:30:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:35.842 18:30:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:35.842 18:30:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:35.842 18:30:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:35.842 18:30:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:35.842 18:30:52 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:35.842 18:30:52 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:35.842 18:30:52 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:05:35.842 00:05:35.842 real 0m1.331s 00:05:35.842 user 0m1.239s 00:05:35.842 sys 0m0.108s 00:05:35.842 18:30:52 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.842 18:30:52 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:35.842 ************************************ 00:05:35.842 END TEST accel_fill 00:05:35.842 ************************************ 00:05:35.842 18:30:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:35.842 18:30:52 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:35.842 18:30:52 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:35.842 18:30:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.842 18:30:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.842 ************************************ 00:05:35.842 START TEST accel_copy_crc32c 00:05:35.842 ************************************ 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@32 -- 
# [[ 0 -gt 0 ]] 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:35.842 [2024-07-15 18:30:52.349444] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:05:35.842 [2024-07-15 18:30:52.349490] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid924733 ] 00:05:35.842 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.842 [2024-07-15 18:30:52.403425] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.842 [2024-07-15 18:30:52.475247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.842 18:30:52 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 
bytes' 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.842 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- 
# read -r var val 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.843 18:30:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.285 18:30:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.285 18:30:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.285 18:30:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.285 18:30:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.285 18:30:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.285 18:30:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.285 18:30:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.285 18:30:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:05:37.285 18:30:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.285 18:30:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.285 18:30:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.285 18:30:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.285 18:30:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.285 18:30:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.285 18:30:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.285 18:30:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.285 18:30:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.285 18:30:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.285 18:30:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.285 18:30:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.285 18:30:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.285 18:30:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.285 18:30:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.285 18:30:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.285 18:30:53 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:37.285 18:30:53 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:37.285 18:30:53 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.285 00:05:37.285 real 0m1.332s 00:05:37.285 user 0m1.230s 00:05:37.285 sys 0m0.116s 00:05:37.285 18:30:53 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.285 18:30:53 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:37.285 ************************************ 00:05:37.285 END TEST accel_copy_crc32c 
00:05:37.285 ************************************ 00:05:37.285 18:30:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:37.285 18:30:53 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:37.285 18:30:53 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:37.285 18:30:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.285 18:30:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.285 ************************************ 00:05:37.285 START TEST accel_copy_crc32c_C2 00:05:37.285 ************************************ 00:05:37.285 18:30:53 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:37.285 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:37.285 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:37.285 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.285 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.285 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:37.285 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:37.285 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:37.285 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.285 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.285 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.285 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.285 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.285 18:30:53 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:37.285 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:37.285 [2024-07-15 18:30:53.745638] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:05:37.285 [2024-07-15 18:30:53.745687] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid924982 ] 00:05:37.285 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.285 [2024-07-15 18:30:53.799306] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.285 [2024-07-15 18:30:53.871049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.285 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.285 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.285 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.285 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.285 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.285 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.286 18:30:53 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:37.286 
18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.286 18:30:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.666 18:30:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:38.666 18:30:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.666 18:30:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.666 18:30:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.666 18:30:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:38.666 18:30:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.666 18:30:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.666 18:30:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.666 18:30:55 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:38.666 18:30:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.666 18:30:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.666 18:30:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.666 18:30:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:38.666 18:30:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.666 18:30:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.666 18:30:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.666 18:30:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:38.666 18:30:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.666 18:30:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.666 18:30:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.666 18:30:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:38.666 18:30:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.666 18:30:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:38.666 18:30:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:38.666 18:30:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:38.666 18:30:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:38.666 18:30:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.666 00:05:38.666 real 0m1.331s 00:05:38.666 user 0m1.241s 00:05:38.666 sys 0m0.103s 00:05:38.666 18:30:55 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.666 18:30:55 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:38.666 ************************************ 00:05:38.666 
END TEST accel_copy_crc32c_C2 00:05:38.666 ************************************ 00:05:38.666 18:30:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:38.666 18:30:55 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:38.666 18:30:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:38.666 18:30:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.666 18:30:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.666 ************************************ 00:05:38.666 START TEST accel_dualcast 00:05:38.666 ************************************ 00:05:38.666 18:30:55 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:38.666 18:30:55 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:38.666 18:30:55 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:38.666 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:38.666 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:38.666 18:30:55 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:38.666 18:30:55 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:38.666 18:30:55 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:38.666 18:30:55 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.666 18:30:55 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.666 18:30:55 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.666 18:30:55 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.666 18:30:55 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.666 18:30:55 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:38.666 18:30:55 accel.accel_dualcast -- 
accel/accel.sh@41 -- # jq -r . 00:05:38.666 [2024-07-15 18:30:55.145654] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:05:38.666 [2024-07-15 18:30:55.145719] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid925233 ] 00:05:38.666 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.666 [2024-07-15 18:30:55.202770] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.666 [2024-07-15 18:30:55.276095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.666 18:30:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:38.666 18:30:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:38.666 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:38.666 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:38.666 18:30:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:38.666 18:30:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:38.666 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:38.666 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:38.666 18:30:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:38.666 18:30:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:38.666 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:38.666 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:38.666 18:30:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:38.666 18:30:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:38.666 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:38.666 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 
00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:38.667 18:30:55 accel.accel_dualcast 
-- accel/accel.sh@19 -- # IFS=: 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:38.667 18:30:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:38.667 18:30:55 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:05:40.048 18:30:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:40.048 18:30:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.048 18:30:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.048 18:30:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.048 18:30:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:40.048 18:30:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.048 18:30:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.048 18:30:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.048 18:30:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:40.048 18:30:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.048 18:30:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.048 18:30:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.048 18:30:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:40.048 18:30:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.048 18:30:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.048 18:30:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.048 18:30:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:40.048 18:30:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.048 18:30:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.048 18:30:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.048 18:30:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:40.048 18:30:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.048 18:30:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.048 18:30:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.048 18:30:56 accel.accel_dualcast -- 
accel/accel.sh@27 -- # [[ -n software ]] 00:05:40.048 18:30:56 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:40.048 18:30:56 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.048 00:05:40.048 real 0m1.337s 00:05:40.048 user 0m1.231s 00:05:40.048 sys 0m0.118s 00:05:40.048 18:30:56 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.048 18:30:56 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:40.048 ************************************ 00:05:40.048 END TEST accel_dualcast 00:05:40.048 ************************************ 00:05:40.048 18:30:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:40.048 18:30:56 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:40.048 18:30:56 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:40.048 18:30:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.048 18:30:56 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.048 ************************************ 00:05:40.048 START TEST accel_compare 00:05:40.048 ************************************ 00:05:40.048 18:30:56 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@12 -- # 
build_accel_config 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:40.048 [2024-07-15 18:30:56.547756] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:05:40.048 [2024-07-15 18:30:56.547821] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid925482 ] 00:05:40.048 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.048 [2024-07-15 18:30:56.603900] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.048 [2024-07-15 18:30:56.676157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:40.048 
18:30:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:40.048 
18:30:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:40.048 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:40.049 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:40.049 18:30:56 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:40.049 18:30:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:40.049 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:40.049 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:40.049 18:30:56 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:40.049 18:30:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:40.049 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:40.049 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:40.049 18:30:56 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:40.049 18:30:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:40.049 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:40.049 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:40.049 18:30:56 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:40.049 18:30:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:40.049 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:40.049 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:40.049 18:30:56 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:40.049 18:30:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:40.049 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:40.049 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:40.049 18:30:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:40.049 
18:30:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:40.049 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:40.049 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:40.049 18:30:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:40.049 18:30:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:40.049 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:40.049 18:30:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 18:30:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:41.427 18:30:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:41.427 18:30:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:41.427 18:30:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 18:30:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:41.427 18:30:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:41.427 18:30:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:41.427 18:30:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 18:30:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:41.427 18:30:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:41.427 18:30:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:41.427 18:30:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 18:30:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:41.427 18:30:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:41.427 18:30:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:41.427 18:30:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 18:30:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:41.427 18:30:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:41.427 18:30:57 accel.accel_compare -- 
accel/accel.sh@19 -- # IFS=: 00:05:41.427 18:30:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 18:30:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:41.427 18:30:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:41.427 18:30:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:41.427 18:30:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 18:30:57 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:41.427 18:30:57 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:41.427 18:30:57 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:41.427 00:05:41.427 real 0m1.334s 00:05:41.427 user 0m1.236s 00:05:41.427 sys 0m0.111s 00:05:41.427 18:30:57 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.427 18:30:57 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:41.427 ************************************ 00:05:41.427 END TEST accel_compare 00:05:41.427 ************************************ 00:05:41.427 18:30:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:41.427 18:30:57 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:41.427 18:30:57 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:41.427 18:30:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.427 18:30:57 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.427 ************************************ 00:05:41.427 START TEST accel_xor 00:05:41.427 ************************************ 00:05:41.427 18:30:57 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:41.427 18:30:57 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:41.427 18:30:57 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:41.427 18:30:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:05:41.427 18:30:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 18:30:57 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:41.427 18:30:57 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:41.427 18:30:57 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:41.427 18:30:57 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.427 18:30:57 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.427 18:30:57 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.427 18:30:57 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.427 18:30:57 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.427 18:30:57 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:41.427 18:30:57 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:41.427 [2024-07-15 18:30:57.946937] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:05:41.427 [2024-07-15 18:30:57.946988] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid925733 ] 00:05:41.427 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.427 [2024-07-15 18:30:58.001727] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.427 [2024-07-15 18:30:58.073836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:41.427 
18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:41.427 18:30:58 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:41.427 18:30:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@21 -- # 
case "$var" in 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.806 00:05:42.806 real 0m1.332s 00:05:42.806 user 0m1.238s 00:05:42.806 sys 
0m0.107s 00:05:42.806 18:30:59 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.806 18:30:59 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:42.806 ************************************ 00:05:42.806 END TEST accel_xor 00:05:42.806 ************************************ 00:05:42.806 18:30:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:42.806 18:30:59 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:42.806 18:30:59 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:42.806 18:30:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.806 18:30:59 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.806 ************************************ 00:05:42.806 START TEST accel_xor 00:05:42.806 ************************************ 00:05:42.806 18:30:59 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.806 18:30:59 accel.accel_xor -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:42.806 18:30:59 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:42.806 [2024-07-15 18:30:59.350912] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:05:42.806 [2024-07-15 18:30:59.350983] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid925984 ] 00:05:42.806 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.806 [2024-07-15 18:30:59.407375] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.806 [2024-07-15 18:30:59.479282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.068 18:30:59 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@21 -- # 
case "$var" in 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.068 18:30:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.005 18:31:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.005 18:31:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.005 18:31:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.005 18:31:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.005 18:31:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.005 18:31:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.005 18:31:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.005 18:31:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.005 18:31:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.005 18:31:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.005 18:31:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.005 18:31:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.005 18:31:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.005 18:31:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.005 18:31:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.005 18:31:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.005 18:31:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.005 18:31:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.005 18:31:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.005 18:31:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.005 18:31:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.005 18:31:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.005 18:31:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.005 18:31:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.005 18:31:00 accel.accel_xor -- 
accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.005 18:31:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:44.005 18:31:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.005 00:05:44.005 real 0m1.339s 00:05:44.005 user 0m1.239s 00:05:44.005 sys 0m0.113s 00:05:44.005 18:31:00 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.005 18:31:00 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:44.005 ************************************ 00:05:44.005 END TEST accel_xor 00:05:44.005 ************************************ 00:05:44.005 18:31:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:44.005 18:31:00 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:44.005 18:31:00 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:44.005 18:31:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.005 18:31:00 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.264 ************************************ 00:05:44.264 START TEST accel_dif_verify 00:05:44.264 ************************************ 00:05:44.264 18:31:00 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:44.264 18:31:00 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:44.264 18:31:00 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:44.264 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:44.264 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:44.264 18:31:00 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:44.264 18:31:00 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:44.264 18:31:00 accel.accel_dif_verify -- accel/accel.sh@12 -- # 
build_accel_config 00:05:44.264 18:31:00 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.264 18:31:00 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.264 18:31:00 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.264 18:31:00 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.264 18:31:00 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.264 18:31:00 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:44.264 18:31:00 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:44.265 [2024-07-15 18:31:00.756962] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:05:44.265 [2024-07-15 18:31:00.757008] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid926231 ] 00:05:44.265 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.265 [2024-07-15 18:31:00.812920] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.265 [2024-07-15 18:31:00.886258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:44.265 18:31:00 accel.accel_dif_verify -- 
accel/accel.sh@20 -- # val=0x1 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:44.265 18:31:00 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # read -r var val 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:44.265 18:31:00 
accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:44.265 18:31:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:45.643 18:31:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:45.643 18:31:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:45.643 18:31:02 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # IFS=: 00:05:45.643 18:31:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:45.643 18:31:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:45.643 18:31:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:45.643 18:31:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:45.643 18:31:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:45.643 18:31:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:45.643 18:31:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:45.643 18:31:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:45.643 18:31:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:45.643 18:31:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:45.643 18:31:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:45.643 18:31:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:45.643 18:31:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:45.643 18:31:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:45.643 18:31:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:45.643 18:31:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:45.643 18:31:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:45.643 18:31:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:45.643 18:31:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:45.643 18:31:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:45.643 18:31:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:45.643 18:31:02 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.643 18:31:02 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:45.643 18:31:02 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ 
software == \s\o\f\t\w\a\r\e ]] 00:05:45.643 00:05:45.643 real 0m1.337s 00:05:45.643 user 0m1.232s 00:05:45.643 sys 0m0.119s 00:05:45.643 18:31:02 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.643 18:31:02 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:45.643 ************************************ 00:05:45.643 END TEST accel_dif_verify 00:05:45.643 ************************************ 00:05:45.643 18:31:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:45.643 18:31:02 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:45.643 18:31:02 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:45.643 18:31:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.643 18:31:02 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.643 ************************************ 00:05:45.643 START TEST accel_dif_generate 00:05:45.643 ************************************ 00:05:45.643 18:31:02 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:45.643 18:31:02 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:45.643 18:31:02 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:45.643 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:45.643 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:45.643 18:31:02 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:45.643 18:31:02 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:45.643 18:31:02 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:45.643 18:31:02 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.643 18:31:02 
accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.643 18:31:02 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.643 18:31:02 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.643 18:31:02 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.643 18:31:02 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:45.643 18:31:02 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:45.643 [2024-07-15 18:31:02.159358] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:05:45.643 [2024-07-15 18:31:02.159421] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid926485 ] 00:05:45.643 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.644 [2024-07-15 18:31:02.215964] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.644 [2024-07-15 18:31:02.288816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@21 
-- # case "$var" in 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 
00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:45.644 18:31:02 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:45.644 18:31:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.022 18:31:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:47.022 18:31:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:05:47.022 18:31:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.022 18:31:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.022 18:31:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:47.022 18:31:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.022 18:31:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.022 18:31:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.022 18:31:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:47.022 18:31:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.022 18:31:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.022 18:31:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.022 18:31:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:47.022 18:31:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.022 18:31:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.022 18:31:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.022 18:31:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:47.022 18:31:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.022 18:31:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.022 18:31:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.022 18:31:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:47.022 18:31:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.022 18:31:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.022 18:31:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.022 18:31:03 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.022 18:31:03 accel.accel_dif_generate -- accel/accel.sh@27 -- 
# [[ -n dif_generate ]] 00:05:47.022 18:31:03 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.022 00:05:47.022 real 0m1.337s 00:05:47.022 user 0m1.232s 00:05:47.022 sys 0m0.119s 00:05:47.022 18:31:03 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.022 18:31:03 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:47.022 ************************************ 00:05:47.022 END TEST accel_dif_generate 00:05:47.022 ************************************ 00:05:47.022 18:31:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:47.022 18:31:03 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:47.022 18:31:03 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:47.022 18:31:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.022 18:31:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.022 ************************************ 00:05:47.022 START TEST accel_dif_generate_copy 00:05:47.022 ************************************ 00:05:47.022 18:31:03 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:47.022 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:47.022 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:47.022 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.022 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.022 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:47.022 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:47.022 18:31:03 
accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:47.022 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.022 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.022 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.022 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.022 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.022 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:47.022 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:47.022 [2024-07-15 18:31:03.559341] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:05:47.022 [2024-07-15 18:31:03.559391] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid926732 ] 00:05:47.022 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.022 [2024-07-15 18:31:03.613200] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.022 [2024-07-15 18:31:03.685365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.022 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:47.022 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.022 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.022 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.022 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:47.022 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.022 18:31:03 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val=1 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.281 18:31:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.219 18:31:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:48.219 18:31:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.219 18:31:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:05:48.219 18:31:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.219 18:31:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:48.219 18:31:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.219 18:31:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.219 18:31:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.219 18:31:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:48.219 18:31:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.219 18:31:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.219 18:31:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.219 18:31:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:48.219 18:31:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.219 18:31:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.219 18:31:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.219 18:31:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:48.219 18:31:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.219 18:31:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.219 18:31:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.219 18:31:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:48.219 18:31:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.219 18:31:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.219 18:31:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.219 18:31:04 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.219 18:31:04 
accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:48.219 18:31:04 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.219 00:05:48.219 real 0m1.334s 00:05:48.219 user 0m1.231s 00:05:48.219 sys 0m0.116s 00:05:48.219 18:31:04 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.219 18:31:04 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:48.219 ************************************ 00:05:48.219 END TEST accel_dif_generate_copy 00:05:48.219 ************************************ 00:05:48.219 18:31:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:48.219 18:31:04 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:48.219 18:31:04 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:48.219 18:31:04 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:48.219 18:31:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.219 18:31:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.479 ************************************ 00:05:48.479 START TEST accel_comp 00:05:48.479 ************************************ 00:05:48.479 18:31:04 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:48.479 18:31:04 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:48.479 18:31:04 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:48.479 18:31:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:48.479 18:31:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:48.479 18:31:04 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:48.479 18:31:04 
accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:48.479 18:31:04 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:48.479 18:31:04 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.479 18:31:04 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.479 18:31:04 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.479 18:31:04 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.479 18:31:04 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.479 18:31:04 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:48.479 18:31:04 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:48.479 [2024-07-15 18:31:04.961867] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:05:48.479 [2024-07-15 18:31:04.961935] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid926977 ] 00:05:48.479 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.479 [2024-07-15 18:31:05.017671] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.479 [2024-07-15 18:31:05.093909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 
00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:48.479 18:31:05 
accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:48.479 18:31:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:49.858 18:31:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:49.858 18:31:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.858 18:31:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:49.858 18:31:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:49.858 18:31:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:49.858 18:31:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.858 18:31:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:49.858 18:31:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:49.858 18:31:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:49.858 18:31:06 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:05:49.858 18:31:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:49.858 18:31:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:49.858 18:31:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:49.858 18:31:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.858 18:31:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:49.858 18:31:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:49.858 18:31:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:49.858 18:31:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.858 18:31:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:49.858 18:31:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:49.858 18:31:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:49.858 18:31:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.858 18:31:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:49.858 18:31:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:49.858 18:31:06 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:49.858 18:31:06 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:49.858 18:31:06 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.858 00:05:49.858 real 0m1.343s 00:05:49.858 user 0m1.242s 00:05:49.858 sys 0m0.117s 00:05:49.858 18:31:06 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.858 18:31:06 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:49.858 ************************************ 00:05:49.858 END TEST accel_comp 00:05:49.858 ************************************ 00:05:49.858 18:31:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:49.858 18:31:06 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:49.858 18:31:06 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:49.858 18:31:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.858 18:31:06 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.858 ************************************ 00:05:49.858 START TEST accel_decomp 00:05:49.858 ************************************ 00:05:49.858 18:31:06 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@41 -- 
# jq -r . 00:05:49.858 [2024-07-15 18:31:06.366440] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:05:49.858 [2024-07-15 18:31:06.366495] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid927232 ] 00:05:49.858 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.858 [2024-07-15 18:31:06.422750] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.858 [2024-07-15 18:31:06.496522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:49.858 18:31:06 accel.accel_decomp -- 
accel/accel.sh@20 -- # val= 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:49.858 18:31:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:49.859 18:31:06 accel.accel_decomp -- 
accel/accel.sh@19 -- # read -r var val 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:49.859 
18:31:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:49.859 18:31:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.234 18:31:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:51.234 18:31:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.234 18:31:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.234 18:31:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.234 18:31:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:51.234 18:31:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.234 18:31:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.234 18:31:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.234 18:31:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:51.234 18:31:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.234 18:31:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.234 18:31:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.234 18:31:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:51.234 18:31:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.234 18:31:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.234 18:31:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.234 18:31:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:51.234 18:31:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" 
in 00:05:51.234 18:31:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.234 18:31:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.234 18:31:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:51.234 18:31:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.234 18:31:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.234 18:31:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.234 18:31:07 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.234 18:31:07 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:51.234 18:31:07 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.234 00:05:51.234 real 0m1.335s 00:05:51.234 user 0m1.229s 00:05:51.234 sys 0m0.120s 00:05:51.234 18:31:07 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.234 18:31:07 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:51.234 ************************************ 00:05:51.234 END TEST accel_decomp 00:05:51.234 ************************************ 00:05:51.234 18:31:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:51.234 18:31:07 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:51.234 18:31:07 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:51.234 18:31:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.234 18:31:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.234 ************************************ 00:05:51.234 START TEST accel_decomp_full 00:05:51.234 ************************************ 00:05:51.234 18:31:07 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:51.234 
18:31:07 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:51.234 18:31:07 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:51.234 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:51.234 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:51.234 18:31:07 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:51.234 18:31:07 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:51.234 18:31:07 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:51.234 18:31:07 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.234 18:31:07 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.234 18:31:07 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.234 18:31:07 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.234 18:31:07 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.234 18:31:07 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:51.234 18:31:07 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:51.234 [2024-07-15 18:31:07.772013] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:05:51.234 [2024-07-15 18:31:07.772081] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid927482 ] 00:05:51.234 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.234 [2024-07-15 18:31:07.828358] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.234 [2024-07-15 18:31:07.901723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.493 18:31:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:51.493 18:31:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:51.493 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:51.493 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:51.493 18:31:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:51.493 18:31:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:51.494 18:31:07 
accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:51.494 18:31:07 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:51.494 18:31:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:52.428 18:31:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:52.428 18:31:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:52.428 18:31:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:52.428 18:31:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:52.428 18:31:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:52.428 18:31:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:52.428 18:31:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:52.428 18:31:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:52.428 18:31:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:52.428 18:31:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:52.428 18:31:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:52.428 18:31:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:52.428 18:31:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:52.428 18:31:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 
00:05:52.428 18:31:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:52.428 18:31:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:52.428 18:31:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:52.428 18:31:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:52.428 18:31:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:52.428 18:31:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:52.428 18:31:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:52.428 18:31:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:52.428 18:31:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:52.428 18:31:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:52.428 18:31:09 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:52.428 18:31:09 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:52.428 18:31:09 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.428 00:05:52.428 real 0m1.345s 00:05:52.428 user 0m1.246s 00:05:52.428 sys 0m0.112s 00:05:52.428 18:31:09 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.428 18:31:09 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:52.428 ************************************ 00:05:52.428 END TEST accel_decomp_full 00:05:52.428 ************************************ 00:05:52.428 18:31:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:52.428 18:31:09 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:52.428 18:31:09 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:52.428 18:31:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.428 18:31:09 accel 
-- common/autotest_common.sh@10 -- # set +x 00:05:52.686 ************************************ 00:05:52.686 START TEST accel_decomp_mcore 00:05:52.686 ************************************ 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:52.686 [2024-07-15 18:31:09.180681] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:05:52.686 [2024-07-15 18:31:09.180734] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid927734 ] 00:05:52.686 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.686 [2024-07-15 18:31:09.235953] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:52.686 [2024-07-15 18:31:09.311702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.686 [2024-07-15 18:31:09.311798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.686 [2024-07-15 18:31:09.311876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:52.686 [2024-07-15 18:31:09.311878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:52.686 18:31:09 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # read -r var val 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:52.686 18:31:09 accel.accel_decomp_mcore 
-- accel/accel.sh@21 -- # case "$var" in 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:52.686 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.687 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.687 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.687 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:52.687 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.687 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.687 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.687 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:52.687 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.687 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.687 18:31:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:54.125 18:31:10 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:54.125 
18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.125 00:05:54.125 real 0m1.348s 00:05:54.125 user 0m4.566s 00:05:54.125 sys 0m0.126s 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.125 18:31:10 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:54.125 ************************************ 00:05:54.125 END TEST accel_decomp_mcore 00:05:54.125 ************************************ 00:05:54.125 18:31:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:54.125 18:31:10 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:54.126 18:31:10 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:54.126 18:31:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.126 18:31:10 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.126 ************************************ 00:05:54.126 START TEST accel_decomp_full_mcore 00:05:54.126 ************************************ 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # 
local accel_module 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:54.126 [2024-07-15 18:31:10.595141] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:05:54.126 [2024-07-15 18:31:10.595191] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid927993 ] 00:05:54.126 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.126 [2024-07-15 18:31:10.650818] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:54.126 [2024-07-15 18:31:10.727088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.126 [2024-07-15 18:31:10.727188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.126 [2024-07-15 18:31:10.727265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.126 [2024-07-15 18:31:10.727267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.126 18:31:10 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:54.126 18:31:10 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 
-- # case "$var" in 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:54.126 18:31:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.506 
18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.506 00:05:55.506 real 0m1.363s 00:05:55.506 user 0m4.617s 00:05:55.506 sys 0m0.126s 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.506 18:31:11 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:55.506 ************************************ 00:05:55.506 END TEST accel_decomp_full_mcore 00:05:55.506 ************************************ 00:05:55.506 18:31:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:55.506 18:31:11 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:55.506 18:31:11 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:55.506 18:31:11 accel -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:05:55.506 18:31:11 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.506 ************************************ 00:05:55.506 START TEST accel_decomp_mthread 00:05:55.506 ************************************ 00:05:55.506 18:31:11 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:55.506 18:31:11 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:55.506 18:31:11 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:55.506 18:31:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.506 18:31:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.506 18:31:11 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
00:05:55.506 [2024-07-15 18:31:12.025857] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:05:55.506 [2024-07-15 18:31:12.025921] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid928261 ] 00:05:55.506 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.506 [2024-07-15 18:31:12.082279] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.506 [2024-07-15 18:31:12.156042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.506 18:31:12 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:55.506 
18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.506 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.766 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.766 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.766 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.766 18:31:12 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.766 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.766 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:55.766 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.766 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.766 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.766 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:55.766 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.766 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.766 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.766 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:55.766 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.766 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.766 18:31:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.704 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:56.704 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.704 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.704 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.704 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:56.704 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.704 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.704 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.704 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:56.704 18:31:13 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:05:56.704 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.704 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.704 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:56.704 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.704 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.704 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.704 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:56.704 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.704 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.704 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.704 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:56.705 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.705 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.705 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.705 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:56.705 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.705 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.705 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.705 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.705 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:56.705 18:31:13 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.705 00:05:56.705 real 0m1.345s 00:05:56.705 user 0m1.243s 00:05:56.705 sys 0m0.115s 00:05:56.705 18:31:13 
accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.705 18:31:13 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:56.705 ************************************ 00:05:56.705 END TEST accel_decomp_mthread 00:05:56.705 ************************************ 00:05:56.705 18:31:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:56.705 18:31:13 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:56.705 18:31:13 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:56.705 18:31:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.705 18:31:13 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.705 ************************************ 00:05:56.705 START TEST accel_decomp_full_mthread 00:05:56.705 ************************************ 00:05:56.705 18:31:13 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:56.705 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:56.705 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:56.964 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.964 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.964 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:56.964 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:56.964 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:56.964 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.964 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.964 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.964 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.964 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.964 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:56.964 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:56.964 [2024-07-15 18:31:13.435246] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:05:56.964 [2024-07-15 18:31:13.435294] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid928524 ] 00:05:56.964 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.964 [2024-07-15 18:31:13.489399] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.964 [2024-07-15 18:31:13.562586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.964 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:56.964 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.964 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.964 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.964 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:56.964 18:31:13 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.964 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.964 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.964 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:56.964 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.964 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.964 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.964 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:56.964 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.964 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.964 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.964 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 
00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read 
-r var val 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.965 18:31:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" 
in 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.344 00:05:58.344 real 0m1.367s 00:05:58.344 user 0m1.268s 00:05:58.344 sys 0m0.112s 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.344 18:31:14 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:58.344 ************************************ 00:05:58.344 END TEST accel_decomp_full_mthread 00:05:58.344 ************************************ 00:05:58.344 18:31:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:58.344 18:31:14 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:58.344 18:31:14 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:58.344 
18:31:14 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:58.344 18:31:14 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:58.344 18:31:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.344 18:31:14 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.344 18:31:14 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.344 18:31:14 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.344 18:31:14 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.344 18:31:14 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.344 18:31:14 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.344 18:31:14 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:58.344 18:31:14 accel -- accel/accel.sh@41 -- # jq -r . 00:05:58.344 ************************************ 00:05:58.344 START TEST accel_dif_functional_tests 00:05:58.344 ************************************ 00:05:58.344 18:31:14 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:58.344 [2024-07-15 18:31:14.887641] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:05:58.344 [2024-07-15 18:31:14.887676] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid928799 ] 00:05:58.344 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.344 [2024-07-15 18:31:14.940746] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.344 [2024-07-15 18:31:15.015243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.344 [2024-07-15 18:31:15.015302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.344 [2024-07-15 18:31:15.015305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.604 00:05:58.604 00:05:58.604 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.604 http://cunit.sourceforge.net/ 00:05:58.604 00:05:58.604 00:05:58.604 Suite: accel_dif 00:05:58.604 Test: verify: DIF generated, GUARD check ...passed 00:05:58.604 Test: verify: DIF generated, APPTAG check ...passed 00:05:58.604 Test: verify: DIF generated, REFTAG check ...passed 00:05:58.604 Test: verify: DIF not generated, GUARD check ...[2024-07-15 18:31:15.084000] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:58.604 passed 00:05:58.604 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 18:31:15.084047] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:58.604 passed 00:05:58.604 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 18:31:15.084080] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:58.604 passed 00:05:58.604 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:58.604 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 18:31:15.084120] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App 
Tag: LBA=30, Expected=28, Actual=14 00:05:58.604 passed 00:05:58.604 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:58.604 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:58.604 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:58.604 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 18:31:15.084218] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:58.604 passed 00:05:58.604 Test: verify copy: DIF generated, GUARD check ...passed 00:05:58.604 Test: verify copy: DIF generated, APPTAG check ...passed 00:05:58.604 Test: verify copy: DIF generated, REFTAG check ...passed 00:05:58.604 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 18:31:15.084332] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:58.604 passed 00:05:58.604 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 18:31:15.084353] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:58.604 passed 00:05:58.604 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 18:31:15.084371] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:58.604 passed 00:05:58.604 Test: generate copy: DIF generated, GUARD check ...passed 00:05:58.604 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:58.604 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:58.604 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:58.604 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:58.604 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:58.604 Test: generate copy: iovecs-len validate ...[2024-07-15 18:31:15.084531] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:05:58.604 passed 00:05:58.604 Test: generate copy: buffer alignment validate ...passed 00:05:58.604 00:05:58.604 Run Summary: Type Total Ran Passed Failed Inactive 00:05:58.604 suites 1 1 n/a 0 0 00:05:58.604 tests 26 26 26 0 0 00:05:58.604 asserts 115 115 115 0 n/a 00:05:58.604 00:05:58.604 Elapsed time = 0.002 seconds 00:05:58.604 00:05:58.604 real 0m0.409s 00:05:58.604 user 0m0.627s 00:05:58.604 sys 0m0.139s 00:05:58.604 18:31:15 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.604 18:31:15 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:58.604 ************************************ 00:05:58.604 END TEST accel_dif_functional_tests 00:05:58.604 ************************************ 00:05:58.604 18:31:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:58.604 00:05:58.604 real 0m30.990s 00:05:58.604 user 0m34.826s 00:05:58.604 sys 0m4.230s 00:05:58.604 18:31:15 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.604 18:31:15 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.604 ************************************ 00:05:58.604 END TEST accel 00:05:58.604 ************************************ 00:05:58.864 18:31:15 -- common/autotest_common.sh@1142 -- # return 0 00:05:58.864 18:31:15 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:58.864 18:31:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.864 18:31:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.864 18:31:15 -- common/autotest_common.sh@10 -- # set +x 00:05:58.864 ************************************ 00:05:58.864 START TEST accel_rpc 00:05:58.864 ************************************ 00:05:58.864 18:31:15 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:58.864 * Looking for test storage... 
00:05:58.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:58.864 18:31:15 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:58.864 18:31:15 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=929025 00:05:58.864 18:31:15 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 929025 00:05:58.864 18:31:15 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 929025 ']' 00:05:58.864 18:31:15 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.864 18:31:15 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.864 18:31:15 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:58.864 18:31:15 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.864 18:31:15 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.864 18:31:15 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.864 [2024-07-15 18:31:15.483609] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:05:58.864 [2024-07-15 18:31:15.483655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929025 ] 00:05:58.864 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.864 [2024-07-15 18:31:15.538061] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.124 [2024-07-15 18:31:15.612432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.693 18:31:16 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.693 18:31:16 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:59.693 18:31:16 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:59.693 18:31:16 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:59.693 18:31:16 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:59.693 18:31:16 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:59.693 18:31:16 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:59.693 18:31:16 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.693 18:31:16 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.693 18:31:16 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.693 ************************************ 00:05:59.693 START TEST accel_assign_opcode 00:05:59.693 ************************************ 00:05:59.693 18:31:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:05:59.693 18:31:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:59.693 18:31:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.693 18:31:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set 
+x 00:05:59.693 [2024-07-15 18:31:16.318523] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:59.693 18:31:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.693 18:31:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:59.693 18:31:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.693 18:31:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:59.693 [2024-07-15 18:31:16.330549] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:59.693 18:31:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.693 18:31:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:59.693 18:31:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.693 18:31:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:59.952 18:31:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.952 18:31:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:59.952 18:31:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:59.952 18:31:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.952 18:31:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:59.952 18:31:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:59.952 18:31:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.952 software 00:05:59.952 00:05:59.952 real 0m0.241s 00:05:59.952 user 0m0.047s 00:05:59.952 sys 0m0.008s 00:05:59.952 18:31:16 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.952 18:31:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:59.952 ************************************ 00:05:59.952 END TEST accel_assign_opcode 00:05:59.952 ************************************ 00:05:59.952 18:31:16 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:59.952 18:31:16 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 929025 00:05:59.952 18:31:16 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 929025 ']' 00:05:59.952 18:31:16 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 929025 00:05:59.952 18:31:16 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:05:59.952 18:31:16 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.952 18:31:16 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 929025 00:05:59.952 18:31:16 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:59.952 18:31:16 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.952 18:31:16 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 929025' 00:05:59.952 killing process with pid 929025 00:05:59.952 18:31:16 accel_rpc -- common/autotest_common.sh@967 -- # kill 929025 00:05:59.952 18:31:16 accel_rpc -- common/autotest_common.sh@972 -- # wait 929025 00:06:00.522 00:06:00.522 real 0m1.589s 00:06:00.522 user 0m1.683s 00:06:00.522 sys 0m0.398s 00:06:00.522 18:31:16 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.522 18:31:16 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.522 ************************************ 00:06:00.522 END TEST accel_rpc 00:06:00.522 ************************************ 00:06:00.522 18:31:16 -- common/autotest_common.sh@1142 -- # return 0 00:06:00.522 18:31:16 -- spdk/autotest.sh@185 -- # run_test app_cmdline 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:00.522 18:31:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.522 18:31:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.522 18:31:16 -- common/autotest_common.sh@10 -- # set +x 00:06:00.522 ************************************ 00:06:00.522 START TEST app_cmdline 00:06:00.522 ************************************ 00:06:00.522 18:31:17 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:00.522 * Looking for test storage... 00:06:00.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:00.522 18:31:17 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:00.522 18:31:17 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=929325 00:06:00.522 18:31:17 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 929325 00:06:00.522 18:31:17 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:00.522 18:31:17 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 929325 ']' 00:06:00.522 18:31:17 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.522 18:31:17 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.522 18:31:17 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.522 18:31:17 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.522 18:31:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:00.522 [2024-07-15 18:31:17.150782] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:06:00.522 [2024-07-15 18:31:17.150828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929325 ] 00:06:00.522 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.522 [2024-07-15 18:31:17.206651] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.781 [2024-07-15 18:31:17.286975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.349 18:31:17 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.349 18:31:17 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:01.349 18:31:17 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:01.608 { 00:06:01.608 "version": "SPDK v24.09-pre git sha1 abb6b4c21", 00:06:01.608 "fields": { 00:06:01.608 "major": 24, 00:06:01.608 "minor": 9, 00:06:01.608 "patch": 0, 00:06:01.608 "suffix": "-pre", 00:06:01.608 "commit": "abb6b4c21" 00:06:01.608 } 00:06:01.608 } 00:06:01.608 18:31:18 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:01.608 18:31:18 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:01.608 18:31:18 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:01.608 18:31:18 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:01.608 18:31:18 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:01.608 18:31:18 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:01.608 18:31:18 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:01.608 18:31:18 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.608 18:31:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:01.608 18:31:18 app_cmdline -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.608 18:31:18 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:01.608 18:31:18 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:01.608 18:31:18 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:01.608 18:31:18 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:01.608 18:31:18 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:01.608 18:31:18 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:01.608 18:31:18 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.608 18:31:18 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:01.608 18:31:18 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.608 18:31:18 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:01.608 18:31:18 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.608 18:31:18 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:01.608 18:31:18 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:01.608 18:31:18 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:01.866 request: 00:06:01.866 { 00:06:01.866 "method": "env_dpdk_get_mem_stats", 00:06:01.866 "req_id": 1 
00:06:01.866 } 00:06:01.866 Got JSON-RPC error response 00:06:01.866 response: 00:06:01.867 { 00:06:01.867 "code": -32601, 00:06:01.867 "message": "Method not found" 00:06:01.867 } 00:06:01.867 18:31:18 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:01.867 18:31:18 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:01.867 18:31:18 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:01.867 18:31:18 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:01.867 18:31:18 app_cmdline -- app/cmdline.sh@1 -- # killprocess 929325 00:06:01.867 18:31:18 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 929325 ']' 00:06:01.867 18:31:18 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 929325 00:06:01.867 18:31:18 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:01.867 18:31:18 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.867 18:31:18 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 929325 00:06:01.867 18:31:18 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:01.867 18:31:18 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:01.867 18:31:18 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 929325' 00:06:01.867 killing process with pid 929325 00:06:01.867 18:31:18 app_cmdline -- common/autotest_common.sh@967 -- # kill 929325 00:06:01.867 18:31:18 app_cmdline -- common/autotest_common.sh@972 -- # wait 929325 00:06:02.126 00:06:02.126 real 0m1.679s 00:06:02.126 user 0m2.016s 00:06:02.126 sys 0m0.416s 00:06:02.126 18:31:18 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.126 18:31:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:02.126 ************************************ 00:06:02.126 END TEST app_cmdline 00:06:02.126 ************************************ 00:06:02.126 18:31:18 -- 
common/autotest_common.sh@1142 -- # return 0 00:06:02.126 18:31:18 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:02.126 18:31:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.126 18:31:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.126 18:31:18 -- common/autotest_common.sh@10 -- # set +x 00:06:02.126 ************************************ 00:06:02.126 START TEST version 00:06:02.126 ************************************ 00:06:02.126 18:31:18 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:02.126 * Looking for test storage... 00:06:02.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:02.126 18:31:18 version -- app/version.sh@17 -- # get_header_version major 00:06:02.126 18:31:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:02.386 18:31:18 version -- app/version.sh@14 -- # cut -f2 00:06:02.386 18:31:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:02.386 18:31:18 version -- app/version.sh@17 -- # major=24 00:06:02.386 18:31:18 version -- app/version.sh@18 -- # get_header_version minor 00:06:02.386 18:31:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:02.386 18:31:18 version -- app/version.sh@14 -- # cut -f2 00:06:02.386 18:31:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:02.386 18:31:18 version -- app/version.sh@18 -- # minor=9 00:06:02.386 18:31:18 version -- app/version.sh@19 -- # get_header_version patch 00:06:02.386 18:31:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:02.386 
18:31:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:02.386 18:31:18 version -- app/version.sh@14 -- # cut -f2 00:06:02.386 18:31:18 version -- app/version.sh@19 -- # patch=0 00:06:02.386 18:31:18 version -- app/version.sh@20 -- # get_header_version suffix 00:06:02.386 18:31:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:02.386 18:31:18 version -- app/version.sh@14 -- # cut -f2 00:06:02.386 18:31:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:02.386 18:31:18 version -- app/version.sh@20 -- # suffix=-pre 00:06:02.386 18:31:18 version -- app/version.sh@22 -- # version=24.9 00:06:02.386 18:31:18 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:02.386 18:31:18 version -- app/version.sh@28 -- # version=24.9rc0 00:06:02.386 18:31:18 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:02.386 18:31:18 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:02.386 18:31:18 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:02.386 18:31:18 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:02.386 00:06:02.386 real 0m0.144s 00:06:02.386 user 0m0.072s 00:06:02.386 sys 0m0.108s 00:06:02.386 18:31:18 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.386 18:31:18 version -- common/autotest_common.sh@10 -- # set +x 00:06:02.386 ************************************ 00:06:02.386 END TEST version 00:06:02.386 ************************************ 00:06:02.386 18:31:18 -- common/autotest_common.sh@1142 -- # return 0 00:06:02.386 18:31:18 -- spdk/autotest.sh@188 -- # 
'[' 0 -eq 1 ']' 00:06:02.386 18:31:18 -- spdk/autotest.sh@198 -- # uname -s 00:06:02.386 18:31:18 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:02.386 18:31:18 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:02.386 18:31:18 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:02.386 18:31:18 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:02.386 18:31:18 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:02.386 18:31:18 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:02.386 18:31:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:02.386 18:31:18 -- common/autotest_common.sh@10 -- # set +x 00:06:02.386 18:31:18 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:02.386 18:31:18 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:02.386 18:31:18 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:02.386 18:31:18 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:02.386 18:31:18 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:02.386 18:31:18 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:02.386 18:31:18 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:02.386 18:31:18 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:02.386 18:31:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.386 18:31:18 -- common/autotest_common.sh@10 -- # set +x 00:06:02.386 ************************************ 00:06:02.386 START TEST nvmf_tcp 00:06:02.386 ************************************ 00:06:02.386 18:31:18 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:02.386 * Looking for test storage... 00:06:02.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:02.386 18:31:19 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:02.386 18:31:19 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:02.386 18:31:19 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:02.386 18:31:19 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:02.386 18:31:19 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:02.386 18:31:19 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:02.386 18:31:19 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:02.386 18:31:19 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:02.386 18:31:19 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:02.386 18:31:19 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:02.387 18:31:19 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:02.387 18:31:19 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:02.387 18:31:19 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:02.387 18:31:19 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:02.387 18:31:19 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:02.387 18:31:19 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:02.387 18:31:19 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:02.387 18:31:19 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:02.387 18:31:19 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:02.647 18:31:19 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:02.647 18:31:19 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:02.647 18:31:19 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:02.647 18:31:19 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:02.647 18:31:19 nvmf_tcp -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.647 18:31:19 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.647 18:31:19 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.647 18:31:19 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.647 18:31:19 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:02.647 18:31:19 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.647 18:31:19 nvmf_tcp -- 
nvmf/common.sh@47 -- # : 0 00:06:02.647 18:31:19 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:02.647 18:31:19 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:02.647 18:31:19 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:02.647 18:31:19 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:02.647 18:31:19 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:02.647 18:31:19 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:02.647 18:31:19 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:02.647 18:31:19 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:02.647 18:31:19 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:02.647 18:31:19 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:02.647 18:31:19 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:02.647 18:31:19 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:02.647 18:31:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:02.647 18:31:19 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:02.647 18:31:19 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:02.647 18:31:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:02.647 18:31:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.647 18:31:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:02.647 ************************************ 00:06:02.647 START TEST nvmf_example 00:06:02.647 ************************************ 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:02.647 * Looking for test storage... 
00:06:02.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:02.647 18:31:19 
nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:02.647 18:31:19 
nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:02.647 
18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:02.647 18:31:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:07.923 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example 
-- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:07.923 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:07.923 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:07.924 Found net devices under 0000:86:00.0: cvl_0_0 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:07.924 Found net devices under 0000:86:00.1: cvl_0_1 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:07.924 18:31:24 
nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:07.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:07.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:06:07.924 00:06:07.924 --- 10.0.0.2 ping statistics --- 00:06:07.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:07.924 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:07.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:07.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:06:07.924 00:06:07.924 --- 10.0.0.1 ping statistics --- 00:06:07.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:07.924 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:07.924 18:31:24 
nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=932743 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 932743 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 932743 ']' 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.924 18:31:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:07.924 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.860 18:31:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.860 18:31:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:08.860 18:31:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:08.860 18:31:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:08.860 18:31:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:08.860 18:31:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:08.860 18:31:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.860 18:31:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:08.860 18:31:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.860 18:31:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:08.860 18:31:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.860 18:31:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:08.860 18:31:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.860 18:31:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:08.860 18:31:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:08.860 18:31:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.860 18:31:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:08.860 18:31:25 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.860 18:31:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:08.860 18:31:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:08.860 18:31:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.860 18:31:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:08.860 18:31:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.860 18:31:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:08.860 18:31:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.860 18:31:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:08.860 18:31:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.860 18:31:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:08.860 18:31:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:08.860 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.113 Initializing NVMe Controllers 00:06:21.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:21.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:21.113 Initialization complete. Launching workers. 
00:06:21.113 ======================================================== 00:06:21.113 Latency(us) 00:06:21.113 Device Information : IOPS MiB/s Average min max 00:06:21.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17959.19 70.15 3563.36 653.32 16197.91 00:06:21.113 ======================================================== 00:06:21.113 Total : 17959.19 70.15 3563.36 653.32 16197.91 00:06:21.113 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:21.113 rmmod nvme_tcp 00:06:21.113 rmmod nvme_fabrics 00:06:21.113 rmmod nvme_keyring 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 932743 ']' 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 932743 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 932743 ']' 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 932743 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 932743 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 932743' 00:06:21.113 killing process with pid 932743 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 932743 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 932743 00:06:21.113 nvmf threads initialize successfully 00:06:21.113 bdev subsystem init successfully 00:06:21.113 created a nvmf target service 00:06:21.113 create targets's poll groups done 00:06:21.113 all subsystems of target started 00:06:21.113 nvmf target is running 00:06:21.113 all subsystems of target stopped 00:06:21.113 destroy targets's poll groups done 00:06:21.113 destroyed the nvmf target service 00:06:21.113 bdev subsystem finish successfully 00:06:21.113 nvmf threads destroy successfully 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:21.113 18:31:35 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:21.372 18:31:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:21.372 18:31:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:21.372 18:31:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:21.372 18:31:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:21.372 00:06:21.372 real 0m18.880s 00:06:21.372 user 0m45.765s 00:06:21.372 sys 0m5.248s 00:06:21.372 18:31:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.372 18:31:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:21.372 ************************************ 00:06:21.372 END TEST nvmf_example 00:06:21.372 ************************************ 00:06:21.372 18:31:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:21.372 18:31:38 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:21.373 18:31:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:21.373 18:31:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.373 18:31:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:21.635 ************************************ 00:06:21.635 START TEST nvmf_filesystem 00:06:21.635 ************************************ 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:21.635 * Looking for test storage... 
00:06:21.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:21.635 18:31:38 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 
00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 
00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:21.635 18:31:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:21.635 #define SPDK_CONFIG_H 00:06:21.635 
#define SPDK_CONFIG_APPS 1 00:06:21.635 #define SPDK_CONFIG_ARCH native 00:06:21.635 #undef SPDK_CONFIG_ASAN 00:06:21.635 #undef SPDK_CONFIG_AVAHI 00:06:21.635 #undef SPDK_CONFIG_CET 00:06:21.635 #define SPDK_CONFIG_COVERAGE 1 00:06:21.635 #define SPDK_CONFIG_CROSS_PREFIX 00:06:21.635 #undef SPDK_CONFIG_CRYPTO 00:06:21.635 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:21.635 #undef SPDK_CONFIG_CUSTOMOCF 00:06:21.635 #undef SPDK_CONFIG_DAOS 00:06:21.635 #define SPDK_CONFIG_DAOS_DIR 00:06:21.635 #define SPDK_CONFIG_DEBUG 1 00:06:21.635 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:21.635 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:21.635 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:21.635 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:21.635 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:21.635 #undef SPDK_CONFIG_DPDK_UADK 00:06:21.635 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:21.635 #define SPDK_CONFIG_EXAMPLES 1 00:06:21.635 #undef SPDK_CONFIG_FC 00:06:21.635 #define SPDK_CONFIG_FC_PATH 00:06:21.635 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:21.635 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:21.635 #undef SPDK_CONFIG_FUSE 00:06:21.635 #undef SPDK_CONFIG_FUZZER 00:06:21.635 #define SPDK_CONFIG_FUZZER_LIB 00:06:21.635 #undef SPDK_CONFIG_GOLANG 00:06:21.635 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:21.635 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:21.635 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:21.635 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:21.635 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:21.635 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:21.635 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:21.635 #define SPDK_CONFIG_IDXD 1 00:06:21.635 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:21.635 #undef SPDK_CONFIG_IPSEC_MB 00:06:21.636 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:21.636 #define SPDK_CONFIG_ISAL 1 00:06:21.636 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:21.636 #define SPDK_CONFIG_ISCSI_INITIATOR 1 
00:06:21.636 #define SPDK_CONFIG_LIBDIR 00:06:21.636 #undef SPDK_CONFIG_LTO 00:06:21.636 #define SPDK_CONFIG_MAX_LCORES 128 00:06:21.636 #define SPDK_CONFIG_NVME_CUSE 1 00:06:21.636 #undef SPDK_CONFIG_OCF 00:06:21.636 #define SPDK_CONFIG_OCF_PATH 00:06:21.636 #define SPDK_CONFIG_OPENSSL_PATH 00:06:21.636 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:21.636 #define SPDK_CONFIG_PGO_DIR 00:06:21.636 #undef SPDK_CONFIG_PGO_USE 00:06:21.636 #define SPDK_CONFIG_PREFIX /usr/local 00:06:21.636 #undef SPDK_CONFIG_RAID5F 00:06:21.636 #undef SPDK_CONFIG_RBD 00:06:21.636 #define SPDK_CONFIG_RDMA 1 00:06:21.636 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:21.636 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:21.636 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:21.636 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:21.636 #define SPDK_CONFIG_SHARED 1 00:06:21.636 #undef SPDK_CONFIG_SMA 00:06:21.636 #define SPDK_CONFIG_TESTS 1 00:06:21.636 #undef SPDK_CONFIG_TSAN 00:06:21.636 #define SPDK_CONFIG_UBLK 1 00:06:21.636 #define SPDK_CONFIG_UBSAN 1 00:06:21.636 #undef SPDK_CONFIG_UNIT_TESTS 00:06:21.636 #undef SPDK_CONFIG_URING 00:06:21.636 #define SPDK_CONFIG_URING_PATH 00:06:21.636 #undef SPDK_CONFIG_URING_ZNS 00:06:21.636 #undef SPDK_CONFIG_USDT 00:06:21.636 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:21.636 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:21.636 #define SPDK_CONFIG_VFIO_USER 1 00:06:21.636 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:21.636 #define SPDK_CONFIG_VHOST 1 00:06:21.636 #define SPDK_CONFIG_VIRTIO 1 00:06:21.636 #undef SPDK_CONFIG_VTUNE 00:06:21.636 #define SPDK_CONFIG_VTUNE_DIR 00:06:21.636 #define SPDK_CONFIG_WERROR 1 00:06:21.636 #define SPDK_CONFIG_WPDK_DIR 00:06:21.636 #undef SPDK_CONFIG_XNVME 00:06:21.636 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 
00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:21.636 18:31:38 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:21.636 18:31:38 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:21.636 18:31:38 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:21.636 18:31:38 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:21.636 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:21.637 18:31:38 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export 
DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:21.637 18:31:38 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:21.637 18:31:38 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 935141 ]] 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 935141 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.ccfVPE 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 
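The `set_test_storage` trace above builds its candidate directory list from `mktemp -udt spdk.XXXXXX`: the test's own directory first, then a per-test subdirectory of the fallback, then the fallback root. A minimal Python sketch of that ordering (the function name is ours; the paths are taken from this log):

```python
import os

def storage_candidates(testdir: str, storage_fallback: str) -> list[str]:
    """Mirror the candidate ordering seen in set_test_storage():
    prefer the test's own directory, then a per-test subdir of the
    mktemp fallback, then the fallback root itself."""
    return [
        testdir,
        os.path.join(storage_fallback, "tests", os.path.basename(testdir)),
        storage_fallback,
    ]

for cand in storage_candidates(
        "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target",
        "/tmp/spdk.ccfVPE"):
    print(cand)
```

The second candidate matches the `mkdir -p ... /tmp/spdk.ccfVPE/tests/target` call that follows in the trace.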
00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ccfVPE/tests/target /tmp/spdk.ccfVPE 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=950202368 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4334227456 00:06:21.637 18:31:38 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=189586350080 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=195974299648 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6387949568 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97983774720 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39185485824 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=39194861568 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # 
uses["$mount"]=9375744 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:21.637 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97986400256 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=749568 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19597422592 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19597426688 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:21.638 * Looking for test storage... 
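The `read -r source fs size use avail _ mount` loop above consumes `df -T | grep -v Filesystem` one mount at a time, filling the `mounts`/`fss`/`sizes`/`uses`/`avails` arrays. An equivalent parser sketched in Python (field layout assumed from the trace; sample rows copied from this run, header already filtered out as the harness does):

```python
def parse_df(df_output: str) -> dict[str, dict]:
    """Parse header-less `df -T` output into one record per mount
    point, keyed by mount path, the way the shell read-loop does."""
    table = {}
    for line in df_output.strip().splitlines():
        # Filesystem Type blocks Used Available Use% Mounted-on
        source, fs, size, used, avail, _pct, mount = line.split(None, 6)
        table[mount] = {
            "source": source, "fs": fs,
            "size": int(size), "used": int(used), "avail": int(avail),
        }
    return table

sample = """spdk_devtmpfs devtmpfs 67108864 0 67108864 0% /dev
spdk_root overlay 195974299648 6387949568 189586350080 4% /"""
print(parse_df(sample)["/"]["avail"])
```

The `/` record here is the overlay mount whose available space (189586350080) becomes `target_space` in the storage check that follows.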
00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=189586350080 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8602542080 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:21.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:06:21.638 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:21.898 18:31:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 
00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:27.174 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:27.174 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:27.174 Found net devices under 0000:86:00.0: cvl_0_0 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:27.174 Found net devices under 0000:86:00.1: cvl_0_1 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:27.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:27.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:06:27.174 00:06:27.174 --- 10.0.0.2 ping statistics --- 00:06:27.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.174 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:27.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:27.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:06:27.174 00:06:27.174 --- 10.0.0.1 ping statistics --- 00:06:27.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.174 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:27.174 ************************************ 00:06:27.174 START TEST nvmf_filesystem_no_in_capsule 00:06:27.174 ************************************ 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # 
in_capsule=0 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=938160 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 938160 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 938160 ']' 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
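Annotation (not part of the trace): `nvmfappstart` (nvmf/common.sh@479-482) launches `nvmf_tgt` inside the test namespace and waits for its RPC socket. A hedged sketch: the binary path and flags are copied from the trace, but the polling loop is a simplification (the real `waitforlisten` helper is more elaborate):

```shell
# Sketch only: flags and paths are from the log; the wait logic is assumed/simplified.
start_nvmf_tgt() {
  local spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk \
    "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll until the app listens on /var/tmp/spdk.sock (bounded retries).
  local i=0
  until [ -S /var/tmp/spdk.sock ] || (( i++ >= 100 )); do sleep 0.1; done
  [ -S /var/tmp/spdk.sock ]
}
```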
00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.174 18:31:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.174 [2024-07-15 18:31:43.782083] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:06:27.175 [2024-07-15 18:31:43.782129] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:27.175 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.175 [2024-07-15 18:31:43.838575] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:27.434 [2024-07-15 18:31:43.920828] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:27.434 [2024-07-15 18:31:43.920867] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:27.434 [2024-07-15 18:31:43.920875] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:27.434 [2024-07-15 18:31:43.920882] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:27.434 [2024-07-15 18:31:43.920887] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
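Annotation (not part of the trace): the `nvmf_tcp_init` namespace plumbing traced earlier (nvmf/common.sh@244-268) moves the target interface into a private netns, addresses both ends, opens the NVMe/TCP port, and ping-checks the path. Collected into one function, commands verbatim from the log (requires root and the two CVL interfaces):

```shell
setup_nvmf_netns() {
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target side lives in the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP port
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
}
```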
00:06:27.434 [2024-07-15 18:31:43.920927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.434 [2024-07-15 18:31:43.921022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.434 [2024-07-15 18:31:43.921106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.434 [2024-07-15 18:31:43.921107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.004 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.004 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:28.004 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:28.004 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:28.004 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.004 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:28.004 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:28.004 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:28.004 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.004 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.004 [2024-07-15 18:31:44.654262] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:28.004 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.004 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:28.004 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.004 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.264 Malloc1 00:06:28.264 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.264 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:28.264 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.264 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.264 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.264 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:28.264 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.264 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.264 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.264 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:28.264 18:31:44 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.264 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.264 [2024-07-15 18:31:44.803393] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:28.264 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.264 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:28.264 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:28.264 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:28.264 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:28.264 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:28.264 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:28.264 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.264 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.264 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.264 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:28.264 { 00:06:28.264 "name": "Malloc1", 00:06:28.264 "aliases": [ 00:06:28.264 "021336f6-c506-4f23-928d-f759d2a7bb7f" 00:06:28.264 ], 00:06:28.264 "product_name": "Malloc disk", 
00:06:28.264 "block_size": 512, 00:06:28.264 "num_blocks": 1048576, 00:06:28.264 "uuid": "021336f6-c506-4f23-928d-f759d2a7bb7f", 00:06:28.264 "assigned_rate_limits": { 00:06:28.264 "rw_ios_per_sec": 0, 00:06:28.264 "rw_mbytes_per_sec": 0, 00:06:28.264 "r_mbytes_per_sec": 0, 00:06:28.264 "w_mbytes_per_sec": 0 00:06:28.264 }, 00:06:28.264 "claimed": true, 00:06:28.264 "claim_type": "exclusive_write", 00:06:28.264 "zoned": false, 00:06:28.264 "supported_io_types": { 00:06:28.264 "read": true, 00:06:28.264 "write": true, 00:06:28.264 "unmap": true, 00:06:28.264 "flush": true, 00:06:28.264 "reset": true, 00:06:28.264 "nvme_admin": false, 00:06:28.264 "nvme_io": false, 00:06:28.264 "nvme_io_md": false, 00:06:28.264 "write_zeroes": true, 00:06:28.264 "zcopy": true, 00:06:28.264 "get_zone_info": false, 00:06:28.264 "zone_management": false, 00:06:28.264 "zone_append": false, 00:06:28.264 "compare": false, 00:06:28.264 "compare_and_write": false, 00:06:28.264 "abort": true, 00:06:28.264 "seek_hole": false, 00:06:28.264 "seek_data": false, 00:06:28.264 "copy": true, 00:06:28.264 "nvme_iov_md": false 00:06:28.264 }, 00:06:28.264 "memory_domains": [ 00:06:28.264 { 00:06:28.265 "dma_device_id": "system", 00:06:28.265 "dma_device_type": 1 00:06:28.265 }, 00:06:28.265 { 00:06:28.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:28.265 "dma_device_type": 2 00:06:28.265 } 00:06:28.265 ], 00:06:28.265 "driver_specific": {} 00:06:28.265 } 00:06:28.265 ]' 00:06:28.265 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:28.265 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:28.265 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:28.265 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:28.265 
18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:28.265 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:28.265 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:28.265 18:31:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:29.641 18:31:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:29.641 18:31:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:29.641 18:31:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:29.641 18:31:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:29.641 18:31:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:31.542 18:31:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:31.542 18:31:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:31.542 18:31:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:31.542 18:31:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:31.542 18:31:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:31.542 18:31:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:31.542 18:31:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:31.542 18:31:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:31.542 18:31:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:31.542 18:31:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:31.542 18:31:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:31.542 18:31:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:31.542 18:31:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:31.542 18:31:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:31.542 18:31:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:31.542 18:31:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:31.542 18:31:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:31.800 18:31:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:32.059 18:31:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:33.464 18:31:49 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:33.464 18:31:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:33.464 18:31:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:33.464 18:31:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.464 18:31:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:33.464 ************************************ 00:06:33.464 START TEST filesystem_ext4 00:06:33.464 ************************************ 00:06:33.464 18:31:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:33.464 18:31:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:33.464 18:31:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:33.464 18:31:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:33.464 18:31:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:33.464 18:31:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:33.464 18:31:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:33.464 18:31:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 
00:06:33.464 18:31:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:33.464 18:31:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:33.464 18:31:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:33.464 mke2fs 1.46.5 (30-Dec-2021) 00:06:33.464 Discarding device blocks: 0/522240 done 00:06:33.464 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:33.464 Filesystem UUID: ff0c5896-803b-4256-8135-3e6884e2adc2 00:06:33.464 Superblock backups stored on blocks: 00:06:33.464 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:33.464 00:06:33.464 Allocating group tables: 0/64 done 00:06:33.464 Writing inode tables: 0/64 done 00:06:33.724 Creating journal (8192 blocks): done 00:06:34.662 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:06:34.662 00:06:34.662 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:34.662 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:34.662 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:34.662 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:34.662 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:34.662 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:34.662 18:31:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:34.662 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:34.662 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 938160 00:06:34.662 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:34.662 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:34.662 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:34.662 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:34.662 00:06:34.662 real 0m1.563s 00:06:34.662 user 0m0.019s 00:06:34.662 sys 0m0.073s 00:06:34.662 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.662 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:34.662 ************************************ 00:06:34.662 END TEST filesystem_ext4 00:06:34.662 ************************************ 00:06:34.921 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:34.921 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:34.921 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:34.921 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.922 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:34.922 ************************************ 00:06:34.922 START TEST filesystem_btrfs 00:06:34.922 ************************************ 00:06:34.922 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:34.922 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:34.922 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:34.922 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:34.922 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:34.922 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:34.922 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:34.922 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:34.922 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:34.922 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:34.922 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:34.922 
btrfs-progs v6.6.2 00:06:34.922 See https://btrfs.readthedocs.io for more information. 00:06:34.922 00:06:34.922 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:06:34.922 NOTE: several default settings have changed in version 5.15, please make sure 00:06:34.922 this does not affect your deployments: 00:06:34.922 - DUP for metadata (-m dup) 00:06:34.922 - enabled no-holes (-O no-holes) 00:06:34.922 - enabled free-space-tree (-R free-space-tree) 00:06:34.922 00:06:34.922 Label: (null) 00:06:34.922 UUID: e3c158d7-40a4-479d-8863-b81d054c970e 00:06:34.922 Node size: 16384 00:06:34.922 Sector size: 4096 00:06:34.922 Filesystem size: 510.00MiB 00:06:34.922 Block group profiles: 00:06:34.922 Data: single 8.00MiB 00:06:34.922 Metadata: DUP 32.00MiB 00:06:34.922 System: DUP 8.00MiB 00:06:34.922 SSD detected: yes 00:06:34.922 Zoned device: no 00:06:34.922 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:34.922 Runtime features: free-space-tree 00:06:34.922 Checksum: crc32c 00:06:34.922 Number of devices: 1 00:06:34.922 Devices: 00:06:34.922 ID SIZE PATH 00:06:34.922 1 510.00MiB /dev/nvme0n1p1 00:06:34.922 00:06:34.922 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:34.922 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:35.490 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:35.490 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:35.490 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:35.490 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 
-- # sync 00:06:35.490 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:35.490 18:31:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:35.490 18:31:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 938160 00:06:35.490 18:31:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:35.490 18:31:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:35.490 18:31:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:35.490 18:31:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:35.490 00:06:35.490 real 0m0.603s 00:06:35.490 user 0m0.029s 00:06:35.490 sys 0m0.122s 00:06:35.490 18:31:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.490 18:31:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:35.490 ************************************ 00:06:35.490 END TEST filesystem_btrfs 00:06:35.490 ************************************ 00:06:35.490 18:31:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:35.490 18:31:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:35.490 18:31:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:35.490 18:31:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.490 18:31:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:35.490 ************************************ 00:06:35.490 START TEST filesystem_xfs 00:06:35.490 ************************************ 00:06:35.490 18:31:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:35.490 18:31:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:35.490 18:31:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:35.490 18:31:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:35.490 18:31:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:35.490 18:31:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:35.490 18:31:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:35.490 18:31:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:06:35.490 18:31:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:35.490 18:31:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:35.490 18:31:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f 
/dev/nvme0n1p1 00:06:35.490 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:35.490 = sectsz=512 attr=2, projid32bit=1 00:06:35.490 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:35.490 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:35.490 data = bsize=4096 blocks=130560, imaxpct=25 00:06:35.490 = sunit=0 swidth=0 blks 00:06:35.490 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:35.490 log =internal log bsize=4096 blocks=16384, version=2 00:06:35.490 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:35.490 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:36.427 Discarding blocks...Done. 00:06:36.427 18:31:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:36.427 18:31:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:38.961 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:38.961 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:38.961 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:38.961 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:38.961 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:38.961 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:38.962 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 938160 00:06:38.962 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # 
lsblk -l -o NAME 00:06:38.962 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:38.962 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:38.962 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:38.962 00:06:38.962 real 0m3.226s 00:06:38.962 user 0m0.019s 00:06:38.962 sys 0m0.076s 00:06:38.962 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.962 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:38.962 ************************************ 00:06:38.962 END TEST filesystem_xfs 00:06:38.962 ************************************ 00:06:38.962 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:38.962 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:39.220 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:39.220 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:39.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:39.220 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:39.221 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:39.221 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 
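The xfs run above, like the btrfs and ext4 runs, goes through the `make_filesystem` helper from `common/autotest_common.sh` (the `fstype=`/`force=`/`i=0` assignments in the trace). A minimal sketch of that retry pattern, with the loop bounds and command names inferred from the trace rather than copied from the actual helper, is:

```shell
# Sketch of the make_filesystem retry helper exercised in the trace above.
# Loop bound and sleep interval are assumptions, not the real values.
make_filesystem() {
    fstype=$1
    dev_name=$2
    i=0
    # ext4 forces with -F; xfs and btrfs force with -f (matches the
    # '[' xfs = ext4 ']' branch visible in the xtrace)
    if [ "$fstype" = ext4 ]; then
        force=-F
    else
        force=-f
    fi
    # retry: a freshly repartitioned NVMe namespace can briefly report busy
    until "mkfs.$fstype" $force "$dev_name"; do
        [ $((i += 1)) -lt 5 ] || return 1
        sleep 1
    done
    return 0
}
```

After `make_filesystem` returns 0, the test mounts the partition, touches and removes a file with `sync` in between, then unmounts and checks with `lsblk -l -o NAME | grep -q -w` that the device and partition are still visible.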
00:06:39.221 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:39.221 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:39.221 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:39.221 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:39.221 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:39.221 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.221 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:39.221 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.221 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:39.221 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 938160 00:06:39.221 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 938160 ']' 00:06:39.221 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 938160 00:06:39.221 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:39.221 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:39.221 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 938160 00:06:39.221 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:39.221 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:39.221 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 938160' 00:06:39.221 killing process with pid 938160 00:06:39.221 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 938160 00:06:39.221 18:31:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 938160 00:06:39.789 18:31:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:39.789 00:06:39.789 real 0m12.477s 00:06:39.789 user 0m49.078s 00:06:39.789 sys 0m1.201s 00:06:39.789 18:31:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.789 18:31:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:39.789 ************************************ 00:06:39.789 END TEST nvmf_filesystem_no_in_capsule 00:06:39.789 ************************************ 00:06:39.789 18:31:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:39.789 18:31:56 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:39.789 18:31:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:39.789 18:31:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.789 18:31:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:39.789 ************************************ 
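The teardown above runs through `killprocess` (`common/autotest_common.sh@948`–`@972`): it checks the pid is alive with `kill -0`, looks up the process name with `ps --no-headers -o comm=`, refuses to blindly kill `sudo`, then kills and waits. A simplified sketch of that pattern (omitting the sudo guard, which needs the real environment) is:

```shell
# Sketch of the killprocess pattern from the trace: verify the pid is
# alive before signalling, then reap it so no zombie is left behind.
killprocess() {
    pid=$1
    # kill -0 sends no signal; it only tests existence/permission
    kill -0 "$pid" 2>/dev/null || return 1
    name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    # wait reaps the child; ignore its (signal-induced) exit status
    wait "$pid" 2>/dev/null
    return 0
}
```

In the log the target pid 938160 resolves to `reactor_0`, the SPDK reactor thread, which is why the name check appears in the trace before the kill.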
00:06:39.789 START TEST nvmf_filesystem_in_capsule 00:06:39.789 ************************************ 00:06:39.789 18:31:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:06:39.789 18:31:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:39.789 18:31:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:39.789 18:31:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:39.789 18:31:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:39.789 18:31:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:39.789 18:31:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=940464 00:06:39.789 18:31:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 940464 00:06:39.789 18:31:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:39.789 18:31:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 940464 ']' 00:06:39.789 18:31:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.789 18:31:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.789 18:31:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:39.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.789 18:31:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.789 18:31:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:39.789 [2024-07-15 18:31:56.346566] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:06:39.789 [2024-07-15 18:31:56.346604] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:39.789 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.789 [2024-07-15 18:31:56.403310] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:39.789 [2024-07-15 18:31:56.483625] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:39.789 [2024-07-15 18:31:56.483661] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:39.789 [2024-07-15 18:31:56.483669] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:39.789 [2024-07-15 18:31:56.483675] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:39.789 [2024-07-15 18:31:56.483680] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:39.789 [2024-07-15 18:31:56.483722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.789 [2024-07-15 18:31:56.483816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.789 [2024-07-15 18:31:56.483901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.790 [2024-07-15 18:31:56.483902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.795 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.795 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:40.795 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:40.795 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:40.796 [2024-07-15 18:31:57.202114] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:40.796 Malloc1 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.796 18:31:57 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:40.796 [2024-07-15 18:31:57.344081] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:40.796 { 00:06:40.796 "name": "Malloc1", 00:06:40.796 "aliases": [ 00:06:40.796 "5c42cb95-a91a-4d97-b6e7-b5aa515d7d41" 00:06:40.796 ], 00:06:40.796 "product_name": "Malloc disk", 00:06:40.796 "block_size": 512, 00:06:40.796 "num_blocks": 1048576, 00:06:40.796 "uuid": "5c42cb95-a91a-4d97-b6e7-b5aa515d7d41", 00:06:40.796 "assigned_rate_limits": { 
00:06:40.796 "rw_ios_per_sec": 0, 00:06:40.796 "rw_mbytes_per_sec": 0, 00:06:40.796 "r_mbytes_per_sec": 0, 00:06:40.796 "w_mbytes_per_sec": 0 00:06:40.796 }, 00:06:40.796 "claimed": true, 00:06:40.796 "claim_type": "exclusive_write", 00:06:40.796 "zoned": false, 00:06:40.796 "supported_io_types": { 00:06:40.796 "read": true, 00:06:40.796 "write": true, 00:06:40.796 "unmap": true, 00:06:40.796 "flush": true, 00:06:40.796 "reset": true, 00:06:40.796 "nvme_admin": false, 00:06:40.796 "nvme_io": false, 00:06:40.796 "nvme_io_md": false, 00:06:40.796 "write_zeroes": true, 00:06:40.796 "zcopy": true, 00:06:40.796 "get_zone_info": false, 00:06:40.796 "zone_management": false, 00:06:40.796 "zone_append": false, 00:06:40.796 "compare": false, 00:06:40.796 "compare_and_write": false, 00:06:40.796 "abort": true, 00:06:40.796 "seek_hole": false, 00:06:40.796 "seek_data": false, 00:06:40.796 "copy": true, 00:06:40.796 "nvme_iov_md": false 00:06:40.796 }, 00:06:40.796 "memory_domains": [ 00:06:40.796 { 00:06:40.796 "dma_device_id": "system", 00:06:40.796 "dma_device_type": 1 00:06:40.796 }, 00:06:40.796 { 00:06:40.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:40.796 "dma_device_type": 2 00:06:40.796 } 00:06:40.796 ], 00:06:40.796 "driver_specific": {} 00:06:40.796 } 00:06:40.796 ]' 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:40.796 18:31:57 
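The `get_bdev_size` steps traced above pull `block_size` and `num_blocks` out of the `bdev_get_bdevs` JSON with jq and multiply them to get the size the later `(( nvme_size == malloc_size ))` check compares against. A self-contained sketch of that parsing, using a trimmed copy of the JSON from the log and assuming jq is installed (as the helper itself does), is:

```shell
# Sketch of get_bdev_size: extract block_size and num_blocks from
# bdev_get_bdevs output and compute total bytes. JSON trimmed from the log.
bdev_info='[{"name":"Malloc1","block_size":512,"num_blocks":1048576}]'
bs=$(printf '%s' "$bdev_info" | jq '.[] .block_size')
nb=$(printf '%s' "$bdev_info" | jq '.[] .num_blocks')
# 512 B blocks * 1048576 blocks = 536870912 B, i.e. the 512 MiB malloc bdev
echo $(( bs * nb ))
```

The helper in the trace then echoes the size in MiB (`echo 512`), which the test multiplies back to `malloc_size=536870912` for the equality check against the NVMe device size read from sysfs.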
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:40.796 18:31:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:42.175 18:31:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:42.175 18:31:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:42.175 18:31:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:42.175 18:31:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:42.175 18:31:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:44.089 18:32:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:44.089 18:32:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:44.089 18:32:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:44.089 18:32:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:44.089 18:32:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:44.089 18:32:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # 
return 0 00:06:44.089 18:32:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:44.089 18:32:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:44.089 18:32:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:44.089 18:32:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:44.089 18:32:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:44.089 18:32:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:44.089 18:32:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:44.089 18:32:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:44.089 18:32:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:44.089 18:32:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:44.089 18:32:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:44.346 18:32:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:44.913 18:32:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:45.849 18:32:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:45.849 18:32:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 
00:06:45.849 18:32:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:45.849 18:32:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.849 18:32:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:46.108 ************************************ 00:06:46.108 START TEST filesystem_in_capsule_ext4 00:06:46.108 ************************************ 00:06:46.108 18:32:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:46.108 18:32:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:46.108 18:32:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:46.108 18:32:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:46.108 18:32:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:46.108 18:32:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:46.108 18:32:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:46.108 18:32:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:46.108 18:32:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:46.108 18:32:02 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:46.108 18:32:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:46.108 mke2fs 1.46.5 (30-Dec-2021) 00:06:46.108 Discarding device blocks: 0/522240 done 00:06:46.108 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:46.108 Filesystem UUID: 0f932431-4d0c-45d7-9dfd-e5f67818b72d 00:06:46.108 Superblock backups stored on blocks: 00:06:46.108 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:46.108 00:06:46.108 Allocating group tables: 0/64 done 00:06:46.108 Writing inode tables: 0/64 done 00:06:46.367 Creating journal (8192 blocks): done 00:06:46.367 Writing superblocks and filesystem accounting information: 0/64 done 00:06:46.367 00:06:46.367 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:46.367 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:46.626 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:46.626 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:46.626 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:46.626 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:46.626 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:46.626 18:32:03 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:46.626 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 940464 00:06:46.626 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:46.626 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:46.626 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:46.626 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:46.626 00:06:46.627 real 0m0.631s 00:06:46.627 user 0m0.026s 00:06:46.627 sys 0m0.065s 00:06:46.627 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.627 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:46.627 ************************************ 00:06:46.627 END TEST filesystem_in_capsule_ext4 00:06:46.627 ************************************ 00:06:46.627 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:46.627 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:46.627 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:46.627 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.627 
18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:46.627 ************************************ 00:06:46.627 START TEST filesystem_in_capsule_btrfs 00:06:46.627 ************************************ 00:06:46.627 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:46.627 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:46.627 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:46.627 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:46.627 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:46.627 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:46.627 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:46.627 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:46.627 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:46.627 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:46.627 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f 
/dev/nvme0n1p1 00:06:46.899 btrfs-progs v6.6.2 00:06:46.899 See https://btrfs.readthedocs.io for more information. 00:06:46.899 00:06:46.899 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:06:46.899 NOTE: several default settings have changed in version 5.15, please make sure 00:06:46.899 this does not affect your deployments: 00:06:46.899 - DUP for metadata (-m dup) 00:06:46.899 - enabled no-holes (-O no-holes) 00:06:46.899 - enabled free-space-tree (-R free-space-tree) 00:06:46.899 00:06:46.899 Label: (null) 00:06:46.899 UUID: cfc40db1-b428-4948-92fa-e12995da475d 00:06:46.899 Node size: 16384 00:06:46.899 Sector size: 4096 00:06:46.899 Filesystem size: 510.00MiB 00:06:46.899 Block group profiles: 00:06:46.899 Data: single 8.00MiB 00:06:46.899 Metadata: DUP 32.00MiB 00:06:46.899 System: DUP 8.00MiB 00:06:46.899 SSD detected: yes 00:06:46.899 Zoned device: no 00:06:46.899 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:46.899 Runtime features: free-space-tree 00:06:46.899 Checksum: crc32c 00:06:46.899 Number of devices: 1 00:06:46.899 Devices: 00:06:46.899 ID SIZE PATH 00:06:46.899 1 510.00MiB /dev/nvme0n1p1 00:06:46.899 00:06:46.899 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:46.900 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:47.468 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:47.468 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:47.468 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:47.468 18:32:03 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:47.468 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:47.468 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:47.469 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 940464 00:06:47.469 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:47.469 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:47.469 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:47.469 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:47.469 00:06:47.469 real 0m0.645s 00:06:47.469 user 0m0.021s 00:06:47.469 sys 0m0.131s 00:06:47.469 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.469 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:47.469 ************************************ 00:06:47.469 END TEST filesystem_in_capsule_btrfs 00:06:47.469 ************************************ 00:06:47.469 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:47.469 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create 
xfs nvme0n1 00:06:47.469 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:47.469 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.469 18:32:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:47.469 ************************************ 00:06:47.469 START TEST filesystem_in_capsule_xfs 00:06:47.469 ************************************ 00:06:47.469 18:32:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:47.469 18:32:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:47.469 18:32:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:47.469 18:32:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:47.469 18:32:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:47.469 18:32:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:47.469 18:32:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:47.469 18:32:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:06:47.469 18:32:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:47.469 18:32:04 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:47.469 18:32:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:47.469 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:47.469 = sectsz=512 attr=2, projid32bit=1 00:06:47.469 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:47.469 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:47.469 data = bsize=4096 blocks=130560, imaxpct=25 00:06:47.469 = sunit=0 swidth=0 blks 00:06:47.469 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:47.469 log =internal log bsize=4096 blocks=16384, version=2 00:06:47.469 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:47.469 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:48.406 Discarding blocks...Done. 00:06:48.406 18:32:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:48.406 18:32:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:50.937 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:50.937 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:50.937 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:50.937 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:50.937 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:50.937 18:32:07 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:50.937 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 940464 00:06:50.937 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:50.937 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:50.937 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:50.937 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:50.937 00:06:50.937 real 0m3.421s 00:06:50.937 user 0m0.030s 00:06:50.937 sys 0m0.063s 00:06:50.937 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.937 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:50.937 ************************************ 00:06:50.937 END TEST filesystem_in_capsule_xfs 00:06:50.937 ************************************ 00:06:50.937 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:50.937 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:51.195 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:51.195 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:51.195 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:06:51.195 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:51.195 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:51.195 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:51.195 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:51.195 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:51.195 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:51.195 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:51.195 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:51.195 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.195 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:51.453 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.453 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:51.453 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 940464 00:06:51.453 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 940464 ']' 00:06:51.453 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@952 -- # kill -0 940464 00:06:51.453 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:51.453 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:51.453 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 940464 00:06:51.453 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:51.453 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:51.453 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 940464' 00:06:51.453 killing process with pid 940464 00:06:51.453 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 940464 00:06:51.453 18:32:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 940464 00:06:51.712 18:32:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:51.712 00:06:51.712 real 0m11.997s 00:06:51.712 user 0m47.124s 00:06:51.712 sys 0m1.209s 00:06:51.712 18:32:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.712 18:32:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:51.712 ************************************ 00:06:51.712 END TEST nvmf_filesystem_in_capsule 00:06:51.712 ************************************ 00:06:51.712 18:32:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:51.712 18:32:08 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:51.712 18:32:08 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:51.712 18:32:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:51.712 18:32:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:51.712 18:32:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:51.712 18:32:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:51.712 18:32:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:51.712 rmmod nvme_tcp 00:06:51.712 rmmod nvme_fabrics 00:06:51.712 rmmod nvme_keyring 00:06:51.712 18:32:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:51.712 18:32:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:51.712 18:32:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:51.712 18:32:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:51.712 18:32:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:51.712 18:32:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:51.712 18:32:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:51.712 18:32:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:51.712 18:32:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:51.712 18:32:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:51.712 18:32:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:51.712 18:32:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:54.277 18:32:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:54.277 00:06:54.277 real 0m32.362s 00:06:54.277 user 1m37.895s 00:06:54.277 sys 0m6.617s 00:06:54.277 18:32:10 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.277 18:32:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.277 ************************************ 00:06:54.277 END TEST nvmf_filesystem 00:06:54.277 ************************************ 00:06:54.277 18:32:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:54.277 18:32:10 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:54.277 18:32:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:54.277 18:32:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.277 18:32:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:54.277 ************************************ 00:06:54.277 START TEST nvmf_target_discovery 00:06:54.277 ************************************ 00:06:54.277 18:32:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:54.277 * Looking for test storage... 
00:06:54.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:54.277 18:32:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:54.277 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:54.277 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:54.277 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:54.277 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:54.277 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:54.277 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:54.277 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:54.277 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:54.277 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:54.277 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:54.277 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:54.277 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:54.277 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:54.277 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:54.277 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:54.277 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:06:54.277 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:54.277 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:54.277 18:32:10 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:54.277 18:32:10 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:54.277 18:32:10 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:54.277 18:32:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.277 18:32:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.278 18:32:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.278 18:32:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:54.278 18:32:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.278 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:54.278 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:54.278 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:54.278 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:54.278 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:54.278 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:54.278 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:54.278 18:32:10 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:54.278 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:54.278 18:32:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:54.278 18:32:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:54.278 18:32:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:54.278 18:32:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:54.278 18:32:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:54.278 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:54.278 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:54.278 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:54.278 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:54.278 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:54.278 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:54.278 18:32:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:54.278 18:32:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:54.278 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:54.278 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:54.278 18:32:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:54.278 18:32:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:59.555 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:59.555 
18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:59.555 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:59.555 Found net devices under 0000:86:00.0: cvl_0_0 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:59.555 Found net devices under 0000:86:00.1: cvl_0_1 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:59.555 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:59.556 18:32:15 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:59.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:59.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:06:59.556 00:06:59.556 --- 10.0.0.2 ping statistics --- 00:06:59.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:59.556 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:59.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:59.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:06:59.556 00:06:59.556 --- 10.0.0.1 ping statistics --- 00:06:59.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:59.556 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- 
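The `nvmf_tcp_init` steps traced above follow a fixed pattern: flush both interfaces, move the target-side NIC into a private network namespace, address both ends of the link, open TCP port 4420, and verify reachability with ping in each direction. A dry-run sketch of that sequence (the `run` wrapper only echoes commands so the order can be inspected without root; remove it to execute for real):

```shell
#!/usr/bin/env bash
# Dry-run wrapper: print each command instead of executing it.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk              # target-side network namespace
TGT_IF=cvl_0_0 INI_IF=cvl_0_1   # target / initiator interfaces

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"               # target NIC into the netns
run ip addr add 10.0.0.1/24 dev "$INI_IF"           # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                              # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator
```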
target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=946052 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 946052 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 946052 ']' 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.556 18:32:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.556 [2024-07-15 18:32:15.965568] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:06:59.556 [2024-07-15 18:32:15.965610] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:59.556 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.556 [2024-07-15 18:32:16.022341] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:59.556 [2024-07-15 18:32:16.102491] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:59.556 [2024-07-15 18:32:16.102526] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:59.556 [2024-07-15 18:32:16.102534] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:59.556 [2024-07-15 18:32:16.102540] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:59.556 [2024-07-15 18:32:16.102545] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
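`waitforlisten` above blocks until the freshly started `nvmf_tgt` accepts connections on `/var/tmp/spdk.sock`, retrying up to `max_retries` times. A self-contained sketch of the same poll-until-ready pattern, using an ordinary file as a stand-in for the RPC socket (in the real script the background job would be the target process creating spdk.sock):

```shell
#!/usr/bin/env bash
# Poll-until-ready, as waitforlisten does for the target's RPC socket.
sock=$(mktemp -u)                  # path only; nothing created yet
( sleep 0.2; : > "$sock" ) &       # stand-in for the target starting up

max_retries=100
for ((i = 0; i < max_retries; i++)); do
    [ -e "$sock" ] && break        # real code would try an RPC here
    sleep 0.1
done
wait                               # reap the background "target"
[ -e "$sock" ] && status=ready || status=timeout
echo "$status"
rm -f "$sock"
```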
00:06:59.556 [2024-07-15 18:32:16.102587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.556 [2024-07-15 18:32:16.102681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.556 [2024-07-15 18:32:16.102773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:59.556 [2024-07-15 18:32:16.102775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.125 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.125 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:00.125 18:32:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:00.125 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:00.125 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.125 18:32:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:00.125 18:32:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:00.125 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.125 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.125 [2024-07-15 18:32:16.814276] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:00.125 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.125 18:32:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:00.125 18:32:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:00.125 18:32:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:00.125 18:32:16 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.125 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.386 Null1 00:07:00.386 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.386 18:32:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:00.386 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.386 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.386 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.386 18:32:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:00.386 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.386 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.386 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.386 18:32:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:00.386 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.386 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.386 [2024-07-15 18:32:16.859683] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:00.386 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.386 18:32:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:00.386 18:32:16 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:00.386 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.386 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.386 Null2 00:07:00.386 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.386 18:32:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:00.386 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.386 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.386 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.386 18:32:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.387 Null3 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null4 102400 512 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.387 Null4 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- 
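The setup phase traced above is a four-iteration loop: create a null bdev, wrap it in a subsystem, attach it as a namespace, and listen on 10.0.0.2:4420, then expose the discovery service and a referral on port 4430. A sketch of that loop, where `rpc` is an illustrative wrapper standing in for `scripts/rpc.py` run inside the target namespace (here it only echoes the command):

```shell
#!/usr/bin/env bash
# "rpc" stands in for scripts/rpc.py; it echoes so the sketch is runnable
# without a live nvmf_tgt process.
rpc() { echo "rpc.py $*"; }

for i in $(seq 1 4); do
  rpc bdev_null_create "Null$i" 102400 512                 # 100 MiB, 512 B blocks
  rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
      -a -s "SPDK0000000000000$i"                          # -a: allow any host
  rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
  rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
      -t tcp -a 10.0.0.2 -s 4420
done
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
```

The discovery log that follows reflects exactly this state: one current discovery entry, four NVMe subsystem entries, and one referral entry, six records in total.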
common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.387 18:32:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:07:00.387 00:07:00.387 Discovery Log Number of Records 6, Generation counter 6 00:07:00.387 =====Discovery Log Entry 0====== 00:07:00.387 trtype: tcp 00:07:00.387 adrfam: ipv4 00:07:00.387 subtype: current discovery subsystem 00:07:00.387 treq: not required 00:07:00.387 portid: 0 00:07:00.387 trsvcid: 4420 00:07:00.387 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:00.387 traddr: 10.0.0.2 00:07:00.387 eflags: explicit discovery connections, duplicate discovery information 00:07:00.387 sectype: none 00:07:00.387 =====Discovery Log Entry 1====== 00:07:00.387 trtype: tcp 00:07:00.387 adrfam: ipv4 00:07:00.387 subtype: nvme subsystem 00:07:00.387 treq: not required 00:07:00.387 portid: 0 00:07:00.387 trsvcid: 4420 00:07:00.387 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:00.387 traddr: 10.0.0.2 00:07:00.387 eflags: none 00:07:00.387 sectype: none 00:07:00.387 =====Discovery Log Entry 2====== 00:07:00.387 trtype: tcp 00:07:00.387 adrfam: ipv4 00:07:00.387 subtype: nvme subsystem 00:07:00.387 treq: not required 00:07:00.387 portid: 
0 00:07:00.387 trsvcid: 4420 00:07:00.387 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:00.387 traddr: 10.0.0.2 00:07:00.387 eflags: none 00:07:00.387 sectype: none 00:07:00.387 =====Discovery Log Entry 3====== 00:07:00.387 trtype: tcp 00:07:00.387 adrfam: ipv4 00:07:00.387 subtype: nvme subsystem 00:07:00.387 treq: not required 00:07:00.387 portid: 0 00:07:00.387 trsvcid: 4420 00:07:00.387 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:00.387 traddr: 10.0.0.2 00:07:00.387 eflags: none 00:07:00.387 sectype: none 00:07:00.387 =====Discovery Log Entry 4====== 00:07:00.387 trtype: tcp 00:07:00.387 adrfam: ipv4 00:07:00.387 subtype: nvme subsystem 00:07:00.387 treq: not required 00:07:00.387 portid: 0 00:07:00.387 trsvcid: 4420 00:07:00.387 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:00.387 traddr: 10.0.0.2 00:07:00.387 eflags: none 00:07:00.387 sectype: none 00:07:00.387 =====Discovery Log Entry 5====== 00:07:00.387 trtype: tcp 00:07:00.387 adrfam: ipv4 00:07:00.387 subtype: discovery subsystem referral 00:07:00.387 treq: not required 00:07:00.387 portid: 0 00:07:00.387 trsvcid: 4430 00:07:00.387 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:00.387 traddr: 10.0.0.2 00:07:00.387 eflags: none 00:07:00.387 sectype: none 00:07:00.387 18:32:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:00.387 Perform nvmf subsystem discovery via RPC 00:07:00.387 18:32:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:00.387 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.387 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.387 [ 00:07:00.387 { 00:07:00.387 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:00.387 "subtype": "Discovery", 00:07:00.387 "listen_addresses": [ 00:07:00.387 { 00:07:00.387 "trtype": "TCP", 00:07:00.387 "adrfam": "IPv4", 00:07:00.387 "traddr": "10.0.0.2", 
00:07:00.387 "trsvcid": "4420" 00:07:00.387 } 00:07:00.387 ], 00:07:00.387 "allow_any_host": true, 00:07:00.387 "hosts": [] 00:07:00.387 }, 00:07:00.387 { 00:07:00.387 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:00.387 "subtype": "NVMe", 00:07:00.387 "listen_addresses": [ 00:07:00.387 { 00:07:00.387 "trtype": "TCP", 00:07:00.387 "adrfam": "IPv4", 00:07:00.387 "traddr": "10.0.0.2", 00:07:00.387 "trsvcid": "4420" 00:07:00.387 } 00:07:00.387 ], 00:07:00.387 "allow_any_host": true, 00:07:00.387 "hosts": [], 00:07:00.387 "serial_number": "SPDK00000000000001", 00:07:00.387 "model_number": "SPDK bdev Controller", 00:07:00.387 "max_namespaces": 32, 00:07:00.387 "min_cntlid": 1, 00:07:00.387 "max_cntlid": 65519, 00:07:00.387 "namespaces": [ 00:07:00.387 { 00:07:00.387 "nsid": 1, 00:07:00.387 "bdev_name": "Null1", 00:07:00.387 "name": "Null1", 00:07:00.387 "nguid": "5615FA9DB2874AA6A3351AC90A8E902B", 00:07:00.387 "uuid": "5615fa9d-b287-4aa6-a335-1ac90a8e902b" 00:07:00.387 } 00:07:00.387 ] 00:07:00.387 }, 00:07:00.387 { 00:07:00.387 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:00.387 "subtype": "NVMe", 00:07:00.387 "listen_addresses": [ 00:07:00.387 { 00:07:00.387 "trtype": "TCP", 00:07:00.387 "adrfam": "IPv4", 00:07:00.387 "traddr": "10.0.0.2", 00:07:00.387 "trsvcid": "4420" 00:07:00.387 } 00:07:00.387 ], 00:07:00.387 "allow_any_host": true, 00:07:00.387 "hosts": [], 00:07:00.387 "serial_number": "SPDK00000000000002", 00:07:00.387 "model_number": "SPDK bdev Controller", 00:07:00.387 "max_namespaces": 32, 00:07:00.387 "min_cntlid": 1, 00:07:00.387 "max_cntlid": 65519, 00:07:00.387 "namespaces": [ 00:07:00.387 { 00:07:00.387 "nsid": 1, 00:07:00.387 "bdev_name": "Null2", 00:07:00.387 "name": "Null2", 00:07:00.387 "nguid": "FDE4713B69764FC1A2EB7770E31C62BF", 00:07:00.387 "uuid": "fde4713b-6976-4fc1-a2eb-7770e31c62bf" 00:07:00.387 } 00:07:00.387 ] 00:07:00.387 }, 00:07:00.387 { 00:07:00.387 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:00.387 "subtype": "NVMe", 00:07:00.387 
"listen_addresses": [ 00:07:00.387 { 00:07:00.387 "trtype": "TCP", 00:07:00.387 "adrfam": "IPv4", 00:07:00.387 "traddr": "10.0.0.2", 00:07:00.387 "trsvcid": "4420" 00:07:00.387 } 00:07:00.387 ], 00:07:00.387 "allow_any_host": true, 00:07:00.387 "hosts": [], 00:07:00.387 "serial_number": "SPDK00000000000003", 00:07:00.387 "model_number": "SPDK bdev Controller", 00:07:00.388 "max_namespaces": 32, 00:07:00.388 "min_cntlid": 1, 00:07:00.388 "max_cntlid": 65519, 00:07:00.388 "namespaces": [ 00:07:00.388 { 00:07:00.388 "nsid": 1, 00:07:00.388 "bdev_name": "Null3", 00:07:00.388 "name": "Null3", 00:07:00.388 "nguid": "4CA91149241B410D8D3A2A081C8761BC", 00:07:00.388 "uuid": "4ca91149-241b-410d-8d3a-2a081c8761bc" 00:07:00.388 } 00:07:00.388 ] 00:07:00.388 }, 00:07:00.388 { 00:07:00.388 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:00.388 "subtype": "NVMe", 00:07:00.388 "listen_addresses": [ 00:07:00.388 { 00:07:00.388 "trtype": "TCP", 00:07:00.388 "adrfam": "IPv4", 00:07:00.388 "traddr": "10.0.0.2", 00:07:00.388 "trsvcid": "4420" 00:07:00.388 } 00:07:00.388 ], 00:07:00.388 "allow_any_host": true, 00:07:00.388 "hosts": [], 00:07:00.388 "serial_number": "SPDK00000000000004", 00:07:00.388 "model_number": "SPDK bdev Controller", 00:07:00.388 "max_namespaces": 32, 00:07:00.388 "min_cntlid": 1, 00:07:00.388 "max_cntlid": 65519, 00:07:00.388 "namespaces": [ 00:07:00.388 { 00:07:00.388 "nsid": 1, 00:07:00.388 "bdev_name": "Null4", 00:07:00.388 "name": "Null4", 00:07:00.388 "nguid": "96F760EB81F6446FA6E3402E62542865", 00:07:00.388 "uuid": "96f760eb-81f6-446f-a6e3-402e62542865" 00:07:00.388 } 00:07:00.388 ] 00:07:00.388 } 00:07:00.388 ] 00:07:00.388 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.388 18:32:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- 
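The `nvmf_get_subsystems` reply above is plain JSON, so individual fields can be pulled out with standard tools (the test's own `check_bdevs` step does this with `jq -r '.[].name'`). A dependency-free sketch extracting the subsystem NQNs from a trimmed two-entry copy of that reply; `jq` would be the idiomatic choice, `sed` keeps the example self-contained:

```shell
#!/usr/bin/env bash
# Pull every "nqn" value out of an nvmf_get_subsystems-style reply.
nqns=$(sed -n 's/.*"nqn": "\([^"]*\)".*/\1/p' <<'EOF'
[
  { "nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery" },
  { "nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe" }
]
EOF
)
printf '%s\n' "$nqns"
```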
target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:00.663 rmmod nvme_tcp 00:07:00.663 rmmod nvme_fabrics 00:07:00.663 rmmod nvme_keyring 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:00.663 
18:32:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 946052 ']' 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 946052 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 946052 ']' 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 946052 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:00.663 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 946052 00:07:00.664 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:00.664 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:00.664 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 946052' 00:07:00.664 killing process with pid 946052 00:07:00.664 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 946052 00:07:00.664 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 946052 00:07:00.923 18:32:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:00.923 18:32:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:00.923 18:32:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:00.923 18:32:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:00.923 18:32:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:00.923 18:32:17 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.923 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:00.923 18:32:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.462 18:32:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:03.462 00:07:03.462 real 0m9.034s 00:07:03.462 user 0m7.183s 00:07:03.462 sys 0m4.310s 00:07:03.462 18:32:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.462 18:32:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.462 ************************************ 00:07:03.462 END TEST nvmf_target_discovery 00:07:03.462 ************************************ 00:07:03.462 18:32:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:03.462 18:32:19 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:03.462 18:32:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:03.462 18:32:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.462 18:32:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:03.462 ************************************ 00:07:03.462 START TEST nvmf_referrals 00:07:03.462 ************************************ 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:03.462 * Looking for test storage... 
00:07:03.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:03.462 
18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:03.462 18:32:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:08.737 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:08.737 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:08.737 Found net devices under 0000:86:00.0: cvl_0_0 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.737 18:32:24 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:08.737 Found net devices under 0000:86:00.1: cvl_0_1 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:08.737 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:08.738 18:32:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:08.738 18:32:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:08.738 18:32:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:08.738 18:32:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:08.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:08.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:07:08.738 00:07:08.738 --- 10.0.0.2 ping statistics --- 00:07:08.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.738 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:07:08.738 18:32:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:08.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:08.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:07:08.738 00:07:08.738 --- 10.0.0.1 ping statistics --- 00:07:08.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.738 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:07:08.738 18:32:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:08.738 18:32:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:08.738 18:32:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:08.738 18:32:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:08.738 18:32:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:08.738 18:32:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:08.738 18:32:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:08.738 18:32:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:08.738 18:32:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:08.738 18:32:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:08.738 18:32:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:08.738 18:32:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:08.738 18:32:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:08.738 18:32:25 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:08.738 18:32:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=949824 00:07:08.738 18:32:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 949824 00:07:08.738 18:32:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 949824 ']' 00:07:08.738 18:32:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.738 18:32:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.738 18:32:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.738 18:32:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.738 18:32:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:08.738 [2024-07-15 18:32:25.149463] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:07:08.738 [2024-07-15 18:32:25.149507] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.738 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.738 [2024-07-15 18:32:25.208177] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:08.738 [2024-07-15 18:32:25.288997] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:08.738 [2024-07-15 18:32:25.289031] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:08.738 [2024-07-15 18:32:25.289038] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:08.738 [2024-07-15 18:32:25.289044] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:08.738 [2024-07-15 18:32:25.289049] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:08.738 [2024-07-15 18:32:25.289104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.738 [2024-07-15 18:32:25.289129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.738 [2024-07-15 18:32:25.289242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.738 [2024-07-15 18:32:25.289243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.329 18:32:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.329 18:32:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:09.329 18:32:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:09.329 18:32:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:09.329 18:32:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:09.329 18:32:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:09.329 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:09.329 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.329 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:09.329 [2024-07-15 18:32:26.014332] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.329 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.329 18:32:26 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:09.329 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.329 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:09.329 [2024-07-15 18:32:26.027662] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:09.329 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.329 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:09.329 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.329 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@48 -- # jq length 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:09.587 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:09.845 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:09.845 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:09.845 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:09.845 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.845 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:09.845 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.845 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:09.845 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.845 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:09.845 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:09.846 18:32:26 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n 
nqn.2016-06.io.spdk:cnode1 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:09.846 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.104 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:10.104 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:10.104 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:10.105 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:10.105 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:10.105 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:10.105 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | 
select(.subtype != "current discovery subsystem").traddr' 00:07:10.105 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:10.105 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:10.105 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:10.105 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:10.105 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:10.105 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:10.105 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:10.105 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:10.363 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:10.363 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:10.363 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:10.363 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:10.363 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:10.363 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
00:07:10.363 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:10.363 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:10.363 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.363 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:10.363 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.363 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:10.363 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:10.363 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:10.363 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:10.363 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.363 18:32:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:10.363 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:10.363 18:32:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.363 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:10.363 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:10.363 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:10.363 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:10.363 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:10.363 18:32:27 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:10.363 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:10.363 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:10.622 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:10.622 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:10.622 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:10.622 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:10.622 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:10.622 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:10.622 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:10.622 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:10.622 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:10.622 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:10.622 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:10.622 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:07:10.622 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:10.881 18:32:27 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:10.881 rmmod nvme_tcp 00:07:10.881 rmmod nvme_fabrics 00:07:10.881 rmmod nvme_keyring 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 949824 ']' 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 949824 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 949824 ']' 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 949824 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:10.881 18:32:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 949824 00:07:11.140 18:32:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:11.140 18:32:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:11.140 18:32:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 949824' 00:07:11.140 killing process with pid 949824 00:07:11.140 18:32:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 949824 00:07:11.140 18:32:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 949824 00:07:11.140 18:32:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:11.140 18:32:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:11.141 18:32:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:11.141 18:32:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:11.141 18:32:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:11.141 18:32:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.141 18:32:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:11.141 18:32:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.680 18:32:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:13.680 00:07:13.680 real 0m10.238s 00:07:13.680 user 0m12.355s 00:07:13.680 sys 0m4.637s 00:07:13.680 18:32:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.680 18:32:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:13.680 ************************************ 
00:07:13.680 END TEST nvmf_referrals 00:07:13.680 ************************************ 00:07:13.680 18:32:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:13.680 18:32:29 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:13.680 18:32:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:13.680 18:32:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.680 18:32:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:13.680 ************************************ 00:07:13.680 START TEST nvmf_connect_disconnect 00:07:13.680 ************************************ 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:13.680 * Looking for test storage... 00:07:13.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.680 18:32:29 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:13.680 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:13.681 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:13.681 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:13.681 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:13.681 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:13.681 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:13.681 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:07:13.681 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:13.681 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:13.681 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:13.681 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.681 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:13.681 18:32:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.681 18:32:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:13.681 18:32:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:13.681 18:32:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:13.681 18:32:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 
00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:17.876 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:17.876 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.876 18:32:34 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:17.876 Found net devices under 0000:86:00.0: cvl_0_0 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:17.876 Found net devices under 0000:86:00.1: cvl_0_1 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:17.876 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:17.877 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:17.877 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:18.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:18.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:07:18.136 00:07:18.136 --- 10.0.0.2 ping statistics --- 00:07:18.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.136 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:18.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:18.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:07:18.136 00:07:18.136 --- 10.0.0.1 ping statistics --- 00:07:18.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.136 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=953678 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 953678 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 953678 ']' 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:18.136 18:32:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:18.136 [2024-07-15 18:32:34.815256] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:07:18.136 [2024-07-15 18:32:34.815302] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.136 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.395 [2024-07-15 18:32:34.873845] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.395 [2024-07-15 18:32:34.955355] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:18.395 [2024-07-15 18:32:34.955389] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:18.395 [2024-07-15 18:32:34.955396] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:18.395 [2024-07-15 18:32:34.955402] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:18.395 [2024-07-15 18:32:34.955407] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:18.395 [2024-07-15 18:32:34.955455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.395 [2024-07-15 18:32:34.955550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.395 [2024-07-15 18:32:34.955568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.395 [2024-07-15 18:32:34.955569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.963 18:32:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:18.963 18:32:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:18.963 18:32:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:18.963 18:32:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:18.963 18:32:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:19.221 18:32:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:19.221 18:32:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:19.221 18:32:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.221 18:32:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:19.221 [2024-07-15 18:32:35.678131] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.221 18:32:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.221 18:32:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:19.221 18:32:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.221 18:32:35 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:07:19.221 18:32:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.221 18:32:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:19.221 18:32:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:19.221 18:32:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.221 18:32:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:19.221 18:32:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.221 18:32:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:19.221 18:32:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.221 18:32:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:19.221 18:32:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.221 18:32:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:19.221 18:32:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.221 18:32:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:19.221 [2024-07-15 18:32:35.729748] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:19.221 18:32:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.221 18:32:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:19.221 18:32:35 
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:19.221 18:32:35 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:22.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:25.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:29.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:32.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:35.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:35.743 18:32:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:35.743 18:32:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:35.743 18:32:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:35.743 18:32:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:35.743 18:32:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:35.743 18:32:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:35.743 18:32:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:35.743 18:32:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:35.743 rmmod nvme_tcp 00:07:35.743 rmmod nvme_fabrics 00:07:35.743 rmmod nvme_keyring 00:07:35.743 18:32:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:35.743 18:32:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:35.743 18:32:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:35.743 18:32:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 953678 ']' 00:07:35.743 18:32:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 953678 00:07:35.743 18:32:52 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 953678 ']' 00:07:35.743 18:32:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 953678 00:07:35.743 18:32:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:07:35.743 18:32:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:35.743 18:32:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 953678 00:07:35.743 18:32:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:35.743 18:32:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:35.743 18:32:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 953678' 00:07:35.743 killing process with pid 953678 00:07:35.743 18:32:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 953678 00:07:35.743 18:32:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 953678 00:07:35.743 18:32:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:35.743 18:32:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:35.743 18:32:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:35.743 18:32:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:35.743 18:32:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:35.743 18:32:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.743 18:32:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:35.743 18:32:52 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.653 18:32:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:37.653 00:07:37.653 real 0m24.416s 00:07:37.653 user 1m10.091s 00:07:37.653 sys 0m4.686s 00:07:37.653 18:32:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.653 18:32:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:37.653 ************************************ 00:07:37.653 END TEST nvmf_connect_disconnect 00:07:37.653 ************************************ 00:07:37.912 18:32:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:37.912 18:32:54 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:37.912 18:32:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:37.912 18:32:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.912 18:32:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:37.912 ************************************ 00:07:37.912 START TEST nvmf_multitarget 00:07:37.912 ************************************ 00:07:37.912 18:32:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:37.912 * Looking for test storage... 
00:07:37.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.912 18:32:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:37.912 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:37.912 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.912 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.912 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.912 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.912 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.912 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.912 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.912 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.912 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.912 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.912 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:37.912 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:07:37.913 18:32:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:43.190 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:43.190 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:07:43.190 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:43.190 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:43.190 18:32:59 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:43.190 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:43.190 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:43.190 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:07:43.190 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:43.190 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:07:43.190 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:07:43.190 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:07:43.190 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:07:43.190 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:07:43.190 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:07:43.190 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:43.190 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:43.190 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:43.190 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:43.190 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:43.190 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:43.191 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:43.191 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:43.191 Found net devices under 0000:86:00.0: cvl_0_0 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:43.191 18:32:59 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:43.191 Found net devices under 0000:86:00.1: cvl_0_1 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:43.191 18:32:59 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:43.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:43.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:07:43.191 00:07:43.191 --- 10.0.0.2 ping statistics --- 00:07:43.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.191 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:43.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:43.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:07:43.191 00:07:43.191 --- 10.0.0.1 ping statistics --- 00:07:43.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.191 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=960067 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 960067 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- 
common/autotest_common.sh@829 -- # '[' -z 960067 ']' 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:43.191 18:32:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:43.191 [2024-07-15 18:32:59.784856] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:07:43.191 [2024-07-15 18:32:59.784898] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.191 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.191 [2024-07-15 18:32:59.840054] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.451 [2024-07-15 18:32:59.921122] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:43.451 [2024-07-15 18:32:59.921155] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:43.451 [2024-07-15 18:32:59.921163] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.451 [2024-07-15 18:32:59.921169] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:43.451 [2024-07-15 18:32:59.921173] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:43.451 [2024-07-15 18:32:59.921233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.451 [2024-07-15 18:32:59.921296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.451 [2024-07-15 18:32:59.921360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.451 [2024-07-15 18:32:59.921361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.019 18:33:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.019 18:33:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:07:44.019 18:33:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:44.019 18:33:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:44.019 18:33:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:44.019 18:33:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.019 18:33:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:44.019 18:33:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:44.019 18:33:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:44.279 18:33:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:44.279 18:33:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:44.279 "nvmf_tgt_1" 00:07:44.279 18:33:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:44.279 "nvmf_tgt_2" 00:07:44.279 18:33:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:44.279 18:33:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:44.538 18:33:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:44.539 18:33:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:44.539 true 00:07:44.539 18:33:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:44.798 true 00:07:44.798 18:33:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:44.798 18:33:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:44.798 18:33:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:44.798 18:33:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:44.798 18:33:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:44.798 18:33:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:44.798 18:33:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:44.798 18:33:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:44.798 18:33:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:44.798 18:33:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:44.798 18:33:01 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:44.798 rmmod nvme_tcp 00:07:44.798 rmmod nvme_fabrics 00:07:44.798 rmmod nvme_keyring 00:07:44.798 18:33:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:44.798 18:33:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:44.798 18:33:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:44.798 18:33:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 960067 ']' 00:07:44.798 18:33:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 960067 00:07:44.798 18:33:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 960067 ']' 00:07:44.798 18:33:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 960067 00:07:44.798 18:33:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:07:44.798 18:33:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:44.798 18:33:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 960067 00:07:44.798 18:33:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:44.798 18:33:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:44.798 18:33:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 960067' 00:07:44.798 killing process with pid 960067 00:07:44.798 18:33:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 960067 00:07:44.798 18:33:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 960067 00:07:45.058 18:33:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:45.058 18:33:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:45.058 18:33:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:07:45.058 18:33:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:45.058 18:33:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:45.058 18:33:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.058 18:33:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.058 18:33:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.594 18:33:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:47.594 00:07:47.594 real 0m9.328s 00:07:47.594 user 0m9.151s 00:07:47.594 sys 0m4.396s 00:07:47.594 18:33:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.594 18:33:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:47.594 ************************************ 00:07:47.594 END TEST nvmf_multitarget 00:07:47.594 ************************************ 00:07:47.594 18:33:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:47.594 18:33:03 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:47.594 18:33:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:47.594 18:33:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.594 18:33:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:47.594 ************************************ 00:07:47.594 START TEST nvmf_rpc 00:07:47.594 ************************************ 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:47.594 * Looking for test storage... 
00:07:47.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:47.594 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:47.595 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:07:47.595 18:33:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:47.595 18:33:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:47.595 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:47.595 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:47.595 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:47.595 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:47.595 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:47.595 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.595 18:33:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:47.595 18:33:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.595 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:47.595 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:47.595 18:33:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:07:47.595 18:33:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.912 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:52.912 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:07:52.912 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:52.912 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:52.912 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:52.912 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:52.912 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:52.912 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:07:52.912 18:33:09 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:52.912 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:07:52.912 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:07:52.912 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:07:52.912 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:07:52.912 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:07:52.912 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 
== mlx5 ]] 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:52.913 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:52.913 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:52.913 Found net devices under 0000:86:00.0: cvl_0_0 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:52.913 Found net devices under 0000:86:00.1: cvl_0_1 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 
00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:52.913 18:33:09 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:52.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:52.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:07:52.913 00:07:52.913 --- 10.0.0.2 ping statistics --- 00:07:52.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.913 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:52.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:52.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:07:52.913 00:07:52.913 --- 10.0.0.1 ping statistics --- 00:07:52.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.913 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:52.913 
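(Editorial sketch, not part of the captured log.) The multitarget checks traced above validate target counts by piping `nvmf_get_targets` output through `jq length` and comparing against an expected value. A minimal stand-in for that pattern, with a hypothetical sample payload replacing the live RPC output and a `tr`/`grep` count substituting for `jq`, might look like:

```shell
# Hedged illustration of the target-count check pattern from multitarget.sh.
# The JSON below is a made-up sample; the real script queries the running
# nvmf_tgt via multitarget_rpc.py nvmf_get_targets and uses `jq length`.
targets='["nvmf_tgt_default", "nvmf_tgt_1", "nvmf_tgt_2"]'
# Count top-level array entries (one quoted target name per entry).
count=$(printf '%s' "$targets" | tr ',' '\n' | grep -c '"nvmf_tgt')
# Fail the test step if the count is not what we expect, as the script does.
[ "$count" -eq 3 ] && echo "target count OK"
```

The real script's `'[' 3 '!=' 3 ']'` comparisons seen in the trace are this same idea inverted: the branch fires (and fails the test) only when the observed count differs from the expected one.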
18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=964247 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 964247 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 964247 ']' 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:52.913 18:33:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.913 [2024-07-15 18:33:09.384662] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:07:52.913 [2024-07-15 18:33:09.384704] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.913 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.913 [2024-07-15 18:33:09.442005] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.913 [2024-07-15 18:33:09.522382] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.913 [2024-07-15 18:33:09.522417] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.913 [2024-07-15 18:33:09.522424] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.913 [2024-07-15 18:33:09.522430] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.913 [2024-07-15 18:33:09.522435] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:52.913 [2024-07-15 18:33:09.522487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.913 [2024-07-15 18:33:09.522579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.913 [2024-07-15 18:33:09.522665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.913 [2024-07-15 18:33:09.522666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:53.853 "tick_rate": 2300000000, 00:07:53.853 "poll_groups": [ 00:07:53.853 { 00:07:53.853 "name": "nvmf_tgt_poll_group_000", 00:07:53.853 "admin_qpairs": 0, 00:07:53.853 "io_qpairs": 0, 00:07:53.853 "current_admin_qpairs": 0, 00:07:53.853 "current_io_qpairs": 0, 00:07:53.853 "pending_bdev_io": 0, 00:07:53.853 "completed_nvme_io": 0, 00:07:53.853 "transports": [] 00:07:53.853 }, 00:07:53.853 { 00:07:53.853 "name": "nvmf_tgt_poll_group_001", 00:07:53.853 "admin_qpairs": 0, 00:07:53.853 "io_qpairs": 0, 00:07:53.853 "current_admin_qpairs": 
0, 00:07:53.853 "current_io_qpairs": 0, 00:07:53.853 "pending_bdev_io": 0, 00:07:53.853 "completed_nvme_io": 0, 00:07:53.853 "transports": [] 00:07:53.853 }, 00:07:53.853 { 00:07:53.853 "name": "nvmf_tgt_poll_group_002", 00:07:53.853 "admin_qpairs": 0, 00:07:53.853 "io_qpairs": 0, 00:07:53.853 "current_admin_qpairs": 0, 00:07:53.853 "current_io_qpairs": 0, 00:07:53.853 "pending_bdev_io": 0, 00:07:53.853 "completed_nvme_io": 0, 00:07:53.853 "transports": [] 00:07:53.853 }, 00:07:53.853 { 00:07:53.853 "name": "nvmf_tgt_poll_group_003", 00:07:53.853 "admin_qpairs": 0, 00:07:53.853 "io_qpairs": 0, 00:07:53.853 "current_admin_qpairs": 0, 00:07:53.853 "current_io_qpairs": 0, 00:07:53.853 "pending_bdev_io": 0, 00:07:53.853 "completed_nvme_io": 0, 00:07:53.853 "transports": [] 00:07:53.853 } 00:07:53.853 ] 00:07:53.853 }' 00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.853 [2024-07-15 18:33:10.356701] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # 
rpc_cmd nvmf_get_stats
00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{
00:07:53.853 "tick_rate": 2300000000,
00:07:53.853 "poll_groups": [
00:07:53.853 {
00:07:53.853 "name": "nvmf_tgt_poll_group_000",
00:07:53.853 "admin_qpairs": 0,
00:07:53.853 "io_qpairs": 0,
00:07:53.853 "current_admin_qpairs": 0,
00:07:53.853 "current_io_qpairs": 0,
00:07:53.853 "pending_bdev_io": 0,
00:07:53.853 "completed_nvme_io": 0,
00:07:53.853 "transports": [
00:07:53.853 {
00:07:53.853 "trtype": "TCP"
00:07:53.853 }
00:07:53.853 ]
00:07:53.853 },
00:07:53.853 {
00:07:53.853 "name": "nvmf_tgt_poll_group_001",
00:07:53.853 "admin_qpairs": 0,
00:07:53.853 "io_qpairs": 0,
00:07:53.853 "current_admin_qpairs": 0,
00:07:53.853 "current_io_qpairs": 0,
00:07:53.853 "pending_bdev_io": 0,
00:07:53.853 "completed_nvme_io": 0,
00:07:53.853 "transports": [
00:07:53.853 {
00:07:53.853 "trtype": "TCP"
00:07:53.853 }
00:07:53.853 ]
00:07:53.853 },
00:07:53.853 {
00:07:53.853 "name": "nvmf_tgt_poll_group_002",
00:07:53.853 "admin_qpairs": 0,
00:07:53.853 "io_qpairs": 0,
00:07:53.853 "current_admin_qpairs": 0,
00:07:53.853 "current_io_qpairs": 0,
00:07:53.853 "pending_bdev_io": 0,
00:07:53.853 "completed_nvme_io": 0,
00:07:53.853 "transports": [
00:07:53.853 {
00:07:53.853 "trtype": "TCP"
00:07:53.853 }
00:07:53.853 ]
00:07:53.853 },
00:07:53.853 {
00:07:53.853 "name": "nvmf_tgt_poll_group_003",
00:07:53.853 "admin_qpairs": 0,
00:07:53.853 "io_qpairs": 0,
00:07:53.853 "current_admin_qpairs": 0,
00:07:53.853 "current_io_qpairs": 0,
00:07:53.853 "pending_bdev_io": 0,
00:07:53.853 "completed_nvme_io": 0,
00:07:53.853 "transports": [
00:07:53.853 {
00:07:53.853 "trtype": "TCP"
00:07:53.853 }
00:07:53.853 ]
00:07:53.853 }
00:07:53.853 ]
00:07:53.853 }'
00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs'
00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 ))
00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs'
00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:07:53.853 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 ))
00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']'
00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64
00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512
00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:53.854 Malloc1
00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc --
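In the trace above, the `jsum` helper (target/rpc.sh) applies a jq filter to the cached `$stats` JSON and totals the resulting numbers with awk, so `(( 0 == 0 ))` verifies that no admin or I/O qpairs exist yet. A minimal self-contained sketch of the summing stage (the zeros stand in for the per-poll-group values jq would emit; `jsum_awk` is an illustrative name, not the script's):

```shell
# awk stage of the jsum pattern: read one number per input line, print the sum.
jsum_awk() {
    awk '{s+=$1} END {print s}'
}

# jq '.poll_groups[].admin_qpairs' on the stats above emits "0" four times:
printf '0\n0\n0\n0\n' | jsum_awk    # prints 0
printf '1\n2\n3\n' | jsum_awk       # prints 6
```

The real helper pipes `jq "$filter" <<< "$stats"` into this same awk one-liner, which is why a target with four idle poll groups sums to 0.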
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.854 [2024-07-15 18:33:10.512587] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:53.854 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:07:53.854 [2024-07-15 18:33:10.541170] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:07:54.113 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:54.113 could not add new controller: failed to write to nvme-fabrics device 00:07:54.113 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:54.113 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:54.113 18:33:10 nvmf_tcp.nvmf_rpc -- 
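The `NOT` wrapper driving this step inverts the exit status of a command the test expects to fail: here `nvme connect` must be rejected because the host NQN has not yet been added to the subsystem, and the I/O error on /dev/nvme-fabrics is the passing outcome. A simplified sketch of the wrapper (the real `NOT` in autotest_common.sh also validates the executable via `valid_exec_arg` and inspects the exit code, as the `es=1` lines show):

```shell
# Expected-failure wrapper: succeed only when the wrapped command fails.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded -> test failure
    fi
    return 0        # command failed, which is what the test expects
}

NOT false && echo "failure was expected"    # prints: failure was expected
```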
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:54.113 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:54.113 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:54.113 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.113 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.113 18:33:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.113 18:33:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:55.049 18:33:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:55.049 18:33:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:55.049 18:33:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:55.049 18:33:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:55.049 18:33:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc 
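The `waitforserial` loop traced above polls `lsblk -l -o NAME,SERIAL` until a block device carrying the subsystem's serial (SPDKISFASTANDAWESOME) appears, retrying up to 15 times with a sleep between probes. A self-contained sketch of that polling pattern, with `list_serials` as a hypothetical stand-in for lsblk so it runs anywhere (here it "finds" the device on the third probe):

```shell
# Stand-in for `lsblk -l -o NAME,SERIAL`; succeeds from the third probe on.
list_serials() {
    [ "$1" -ge 3 ] && echo "nvme0n1 SPDKISFASTANDAWESOME"
}

# waitforserial-style loop: poll for a device with the given serial,
# giving up after 15 attempts (the real helper sleeps 2s between probes).
waitforserial() {
    local serial=$1 i=0
    while [ $((i += 1)) -le 15 ]; do
        if list_serials "$i" | grep -q -w "$serial"; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

waitforserial SPDKISFASTANDAWESOME && echo "device present"    # prints: device present
```

The disconnect side (`waitforserial_disconnect`) is the mirror image: it loops until `grep -q -w` on the lsblk output stops matching.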
-- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:57.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:57.584 18:33:13 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:57.584 [2024-07-15 18:33:13.892992] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:07:57.584 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:57.584 could not add new controller: failed to write to nvme-fabrics device 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.584 18:33:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:58.521 18:33:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:58.521 18:33:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:58.521 18:33:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:58.521 18:33:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:58.521 18:33:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:00.426 18:33:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:00.426 18:33:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:00.426 18:33:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:00.426 18:33:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:00.426 18:33:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:00.426 18:33:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:00.426 18:33:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:00.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.683 [2024-07-15 18:33:17.281655] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.683 18:33:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:02.058 18:33:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:02.058 18:33:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:02.058 18:33:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:02.058 18:33:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:02.058 18:33:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:03.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 
-- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.961 [2024-07-15 18:33:20.610762] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.961 18:33:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:03.962 18:33:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.962 18:33:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.962 18:33:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.962 18:33:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:03.962 18:33:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.962 18:33:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.962 18:33:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.962 18:33:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:05.337 18:33:21 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:05.337 18:33:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:05.337 18:33:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:05.337 18:33:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:05.337 18:33:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:07.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:07.280 18:33:23 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.280 [2024-07-15 18:33:23.949380] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:07.280 18:33:23 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.280 18:33:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:08.656 18:33:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:08.656 18:33:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:08.656 18:33:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:08.656 18:33:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:08.656 18:33:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:10.602 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:10.602 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:10.602 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:10.602 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:10.602 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:10.602 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 
0 00:08:10.602 18:33:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:10.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:10.602 18:33:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:10.602 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:10.602 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:10.602 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:10.602 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:10.602 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:10.602 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:10.861 18:33:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:10.861 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.861 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.861 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.861 18:33:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:10.861 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.861 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.861 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.861 18:33:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:10.861 18:33:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:10.861 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:10.861 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.861 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.861 18:33:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:10.861 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.861 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.861 [2024-07-15 18:33:27.343756] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.861 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.861 18:33:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:10.861 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.861 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.861 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.861 18:33:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:10.861 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.861 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.861 18:33:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.861 18:33:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:11.798 18:33:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:11.798 18:33:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 
00:08:11.798 18:33:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:11.798 18:33:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:11.798 18:33:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:14.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.342 [2024-07-15 18:33:30.628741] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.342 18:33:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:15.280 18:33:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:15.280 18:33:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:15.280 18:33:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:15.280 18:33:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:15.280 18:33:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:17.184 18:33:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:17.184 18:33:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:17.184 18:33:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:17.184 18:33:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:17.184 18:33:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:17.184 18:33:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:17.184 18:33:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:17.444 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.444 18:33:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:17.444 18:33:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:17.444 18:33:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:17.444 18:33:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:17.444 18:33:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:17.444 18:33:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:17.444 18:33:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:17.444 18:33:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:17.444 18:33:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.444 18:33:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.444 18:33:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.444 18:33:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:17.444 18:33:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.444 18:33:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.444 18:33:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.444 18:33:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:08:17.444 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:17.444 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.445 18:33:34 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.445 [2024-07-15 18:33:34.022067] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.445 [2024-07-15 18:33:34.070179] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.445 [2024-07-15 18:33:34.122318] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.445 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.704 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:17.704 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.704 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.704 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.704 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:17.704 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:17.704 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.704 18:33:34 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.704 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.704 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:17.704 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.704 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.704 [2024-07-15 18:33:34.170483] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:17.704 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.704 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:17.704 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.704 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.704 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.704 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:17.704 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.704 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.704 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.704 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.704 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.704 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.704 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.704 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:17.704 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.704 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.704 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.704 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.705 [2024-07-15 18:33:34.218649] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:08:17.705 "tick_rate": 2300000000, 00:08:17.705 "poll_groups": [ 00:08:17.705 { 00:08:17.705 "name": "nvmf_tgt_poll_group_000", 00:08:17.705 "admin_qpairs": 2, 00:08:17.705 "io_qpairs": 168, 00:08:17.705 "current_admin_qpairs": 0, 00:08:17.705 "current_io_qpairs": 0, 00:08:17.705 "pending_bdev_io": 0, 00:08:17.705 "completed_nvme_io": 317, 00:08:17.705 "transports": [ 00:08:17.705 { 00:08:17.705 "trtype": "TCP" 00:08:17.705 } 00:08:17.705 ] 00:08:17.705 }, 00:08:17.705 { 00:08:17.705 "name": "nvmf_tgt_poll_group_001", 00:08:17.705 "admin_qpairs": 2, 00:08:17.705 "io_qpairs": 168, 
00:08:17.705 "current_admin_qpairs": 0, 00:08:17.705 "current_io_qpairs": 0, 00:08:17.705 "pending_bdev_io": 0, 00:08:17.705 "completed_nvme_io": 268, 00:08:17.705 "transports": [ 00:08:17.705 { 00:08:17.705 "trtype": "TCP" 00:08:17.705 } 00:08:17.705 ] 00:08:17.705 }, 00:08:17.705 { 00:08:17.705 "name": "nvmf_tgt_poll_group_002", 00:08:17.705 "admin_qpairs": 1, 00:08:17.705 "io_qpairs": 168, 00:08:17.705 "current_admin_qpairs": 0, 00:08:17.705 "current_io_qpairs": 0, 00:08:17.705 "pending_bdev_io": 0, 00:08:17.705 "completed_nvme_io": 219, 00:08:17.705 "transports": [ 00:08:17.705 { 00:08:17.705 "trtype": "TCP" 00:08:17.705 } 00:08:17.705 ] 00:08:17.705 }, 00:08:17.705 { 00:08:17.705 "name": "nvmf_tgt_poll_group_003", 00:08:17.705 "admin_qpairs": 2, 00:08:17.705 "io_qpairs": 168, 00:08:17.705 "current_admin_qpairs": 0, 00:08:17.705 "current_io_qpairs": 0, 00:08:17.705 "pending_bdev_io": 0, 00:08:17.705 "completed_nvme_io": 218, 00:08:17.705 "transports": [ 00:08:17.705 { 00:08:17.705 "trtype": "TCP" 00:08:17.705 } 00:08:17.705 ] 00:08:17.705 } 00:08:17.705 ] 00:08:17.705 }' 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@113 -- # (( 672 > 0 )) 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:17.705 18:33:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:17.705 rmmod nvme_tcp 00:08:17.705 rmmod nvme_fabrics 00:08:17.705 rmmod nvme_keyring 00:08:17.965 18:33:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:17.965 18:33:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:08:17.965 18:33:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:08:17.965 18:33:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 964247 ']' 00:08:17.965 18:33:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 964247 00:08:17.965 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 964247 ']' 00:08:17.965 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 964247 00:08:17.965 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:08:17.965 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:17.965 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 964247 00:08:17.965 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:17.965 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:17.965 18:33:34 
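The `jsum` helper used above sums a jq filter over the `nvmf_get_stats` JSON (`jq '<filter>' | awk '{s+=$1}END{print s}'`), yielding the 7 admin qpairs and 672 I/O qpairs checked by the `(( ... > 0 ))` assertions. A minimal Python equivalent for illustration, with the stats dict abridged from the log output to the summed fields:

```python
# Stats abridged from the nvmf_get_stats output in the log: four poll
# groups, each with 168 io_qpairs and 2/2/1/2 admin_qpairs.
stats = {
    "tick_rate": 2300000000,
    "poll_groups": [
        {"name": "nvmf_tgt_poll_group_000", "admin_qpairs": 2, "io_qpairs": 168},
        {"name": "nvmf_tgt_poll_group_001", "admin_qpairs": 2, "io_qpairs": 168},
        {"name": "nvmf_tgt_poll_group_002", "admin_qpairs": 1, "io_qpairs": 168},
        {"name": "nvmf_tgt_poll_group_003", "admin_qpairs": 2, "io_qpairs": 168},
    ],
}

def jsum(stats, field):
    """Sum `field` across poll groups, like jq '.poll_groups[].<field>'
    piped through awk '{s+=$1}END{print s}' in the log's jsum helper."""
    return sum(pg[field] for pg in stats["poll_groups"])

print(jsum(stats, "admin_qpairs"))  # 7, matching the log's (( 7 > 0 )) check
print(jsum(stats, "io_qpairs"))     # 672, matching (( 672 > 0 ))
```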
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 964247' 00:08:17.965 killing process with pid 964247 00:08:17.965 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 964247 00:08:17.965 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 964247 00:08:18.224 18:33:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:18.224 18:33:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:18.224 18:33:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:18.224 18:33:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:18.224 18:33:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:18.224 18:33:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.224 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.224 18:33:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.131 18:33:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:20.131 00:08:20.131 real 0m32.951s 00:08:20.131 user 1m41.880s 00:08:20.131 sys 0m5.861s 00:08:20.131 18:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:20.131 18:33:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.131 ************************************ 00:08:20.131 END TEST nvmf_rpc 00:08:20.131 ************************************ 00:08:20.131 18:33:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:20.131 18:33:36 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:20.131 18:33:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:20.131 18:33:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:08:20.131 18:33:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:20.131 ************************************ 00:08:20.131 START TEST nvmf_invalid 00:08:20.131 ************************************ 00:08:20.131 18:33:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:20.390 * Looking for test storage... 00:08:20.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:20.391 18:33:36 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:08:20.391 18:33:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:25.669 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.669 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:08:25.669 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:25.669 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:25.669 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:25.669 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:25.669 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:25.669 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:08:25.669 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:25.669 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:08:25.669 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:08:25.669 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:08:25.670 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:08:25.670 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:08:25.670 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:08:25.670 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.670 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.670 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.670 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.670 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.670 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.670 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.670 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.670 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.670 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.670 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.670 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:25.670 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:25.670 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:25.670 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:25.670 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:25.670 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:25.670 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.670 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:25.670 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:25.670 18:33:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.670 18:33:41 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:25.670 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:25.670 Found net devices under 0000:86:00.0: cvl_0_0 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:25.670 Found net devices under 0000:86:00.1: cvl_0_1 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.670 18:33:42 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:25.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:25.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:08:25.670 00:08:25.670 --- 10.0.0.2 ping statistics --- 00:08:25.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.670 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:25.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:08:25.670 00:08:25.670 --- 10.0.0.1 ping statistics --- 00:08:25.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.670 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@481 -- # nvmfpid=972106 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 972106 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 972106 ']' 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:25.670 18:33:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:25.670 [2024-07-15 18:33:42.346190] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:08:25.670 [2024-07-15 18:33:42.346245] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.670 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.930 [2024-07-15 18:33:42.405373] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:25.930 [2024-07-15 18:33:42.486941] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.930 [2024-07-15 18:33:42.486977] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:25.930 [2024-07-15 18:33:42.486984] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.930 [2024-07-15 18:33:42.486990] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.930 [2024-07-15 18:33:42.486995] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.930 [2024-07-15 18:33:42.487035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.930 [2024-07-15 18:33:42.487129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.930 [2024-07-15 18:33:42.487215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:25.930 [2024-07-15 18:33:42.487216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.497 18:33:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:26.497 18:33:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:08:26.497 18:33:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:26.497 18:33:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:26.497 18:33:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:26.497 18:33:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.497 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:26.497 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode4138 00:08:26.766 [2024-07-15 18:33:43.348651] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:08:26.766 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # 
out='request: 00:08:26.766 { 00:08:26.766 "nqn": "nqn.2016-06.io.spdk:cnode4138", 00:08:26.766 "tgt_name": "foobar", 00:08:26.766 "method": "nvmf_create_subsystem", 00:08:26.766 "req_id": 1 00:08:26.766 } 00:08:26.766 Got JSON-RPC error response 00:08:26.766 response: 00:08:26.766 { 00:08:26.766 "code": -32603, 00:08:26.766 "message": "Unable to find target foobar" 00:08:26.766 }' 00:08:26.766 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:08:26.766 { 00:08:26.766 "nqn": "nqn.2016-06.io.spdk:cnode4138", 00:08:26.766 "tgt_name": "foobar", 00:08:26.766 "method": "nvmf_create_subsystem", 00:08:26.766 "req_id": 1 00:08:26.766 } 00:08:26.766 Got JSON-RPC error response 00:08:26.766 response: 00:08:26.766 { 00:08:26.766 "code": -32603, 00:08:26.766 "message": "Unable to find target foobar" 00:08:26.766 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:08:26.766 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:08:26.766 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18141 00:08:27.026 [2024-07-15 18:33:43.541382] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18141: invalid serial number 'SPDKISFASTANDAWESOME' 00:08:27.026 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:08:27.026 { 00:08:27.026 "nqn": "nqn.2016-06.io.spdk:cnode18141", 00:08:27.026 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:27.026 "method": "nvmf_create_subsystem", 00:08:27.026 "req_id": 1 00:08:27.026 } 00:08:27.026 Got JSON-RPC error response 00:08:27.026 response: 00:08:27.026 { 00:08:27.026 "code": -32602, 00:08:27.026 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:27.026 }' 00:08:27.026 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:08:27.026 { 00:08:27.026 "nqn": 
"nqn.2016-06.io.spdk:cnode18141", 00:08:27.026 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:27.026 "method": "nvmf_create_subsystem", 00:08:27.026 "req_id": 1 00:08:27.026 } 00:08:27.026 Got JSON-RPC error response 00:08:27.026 response: 00:08:27.026 { 00:08:27.026 "code": -32602, 00:08:27.026 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:27.026 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:27.026 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:08:27.026 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode6531 00:08:27.285 [2024-07-15 18:33:43.738021] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6531: invalid model number 'SPDK_Controller' 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:08:27.285 { 00:08:27.285 "nqn": "nqn.2016-06.io.spdk:cnode6531", 00:08:27.285 "model_number": "SPDK_Controller\u001f", 00:08:27.285 "method": "nvmf_create_subsystem", 00:08:27.285 "req_id": 1 00:08:27.285 } 00:08:27.285 Got JSON-RPC error response 00:08:27.285 response: 00:08:27.285 { 00:08:27.285 "code": -32602, 00:08:27.285 "message": "Invalid MN SPDK_Controller\u001f" 00:08:27.285 }' 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:08:27.285 { 00:08:27.285 "nqn": "nqn.2016-06.io.spdk:cnode6531", 00:08:27.285 "model_number": "SPDK_Controller\u001f", 00:08:27.285 "method": "nvmf_create_subsystem", 00:08:27.285 "req_id": 1 00:08:27.285 } 00:08:27.285 Got JSON-RPC error response 00:08:27.285 response: 00:08:27.285 { 00:08:27.285 "code": -32602, 00:08:27.285 "message": "Invalid MN SPDK_Controller\u001f" 00:08:27.285 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@19 -- # local length=21 ll 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 
00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.285 
18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:08:27.285 
18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.285 
18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.285 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ T == \- ]] 00:08:27.286 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'T?u/>R6>H8G]M=~<;zU[a' 00:08:27.286 18:33:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'T?u/>R6>H8G]M=~<;zU[a' nqn.2016-06.io.spdk:cnode24370 00:08:27.544 [2024-07-15 18:33:44.059118] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24370: invalid serial number 'T?u/>R6>H8G]M=~<;zU[a' 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:08:27.544 { 00:08:27.544 "nqn": "nqn.2016-06.io.spdk:cnode24370", 00:08:27.544 "serial_number": "T?u/>R6>H8G]M=~<;zU[a", 00:08:27.544 "method": "nvmf_create_subsystem", 00:08:27.544 "req_id": 1 00:08:27.544 } 00:08:27.544 Got JSON-RPC error response 00:08:27.544 response: 00:08:27.544 { 00:08:27.544 "code": -32602, 00:08:27.544 "message": "Invalid SN T?u/>R6>H8G]M=~<;zU[a" 00:08:27.544 }' 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:08:27.544 { 00:08:27.544 "nqn": "nqn.2016-06.io.spdk:cnode24370", 00:08:27.544 "serial_number": "T?u/>R6>H8G]M=~<;zU[a", 00:08:27.544 "method": "nvmf_create_subsystem", 00:08:27.544 "req_id": 1 00:08:27.544 } 00:08:27.544 Got JSON-RPC error response 00:08:27.544 response: 00:08:27.544 { 00:08:27.544 "code": -32602, 00:08:27.544 "message": "Invalid SN T?u/>R6>H8G]M=~<;zU[a" 00:08:27.544 } == *\I\n\v\a\l\i\d\ \S\N* ]] 
00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.544 
18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:08:27.544 18:33:44 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:08:27.544 18:33:44 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:08:27.544 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.545 18:33:44 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:08:27.545 18:33:44 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:08:27.545 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:08:27.803 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.803 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.803 18:33:44 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:08:27.803 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:08:27.803 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:08:27.804 18:33:44 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:08:27.804 18:33:44 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.804 18:33:44 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 3 == \- ]] 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '3;Oe\2YM;DKFJa&6>Z]6UZnIe::Z59DCLY;Z=@*=' 00:08:27.804 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '3;Oe\2YM;DKFJa&6>Z]6UZnIe::Z59DCLY;Z=@*=' nqn.2016-06.io.spdk:cnode18462 00:08:27.804 [2024-07-15 18:33:44.504644] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18462: invalid model number '3;Oe\2YM;DKFJa&6>Z]6UZnIe::Z59DCLY;Z=@*=' 00:08:28.063 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:08:28.063 { 00:08:28.063 "nqn": "nqn.2016-06.io.spdk:cnode18462", 00:08:28.063 "model_number": "3;Oe\\2YM;DKFJa&6>Z]6UZnIe::Z59DCLY;Z\u007f=@*=", 00:08:28.063 "method": "nvmf_create_subsystem", 00:08:28.063 "req_id": 1 00:08:28.063 } 00:08:28.063 Got JSON-RPC error response 00:08:28.063 response: 00:08:28.063 { 00:08:28.063 
"code": -32602, 00:08:28.063 "message": "Invalid MN 3;Oe\\2YM;DKFJa&6>Z]6UZnIe::Z59DCLY;Z\u007f=@*=" 00:08:28.063 }' 00:08:28.063 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:08:28.063 { 00:08:28.063 "nqn": "nqn.2016-06.io.spdk:cnode18462", 00:08:28.063 "model_number": "3;Oe\\2YM;DKFJa&6>Z]6UZnIe::Z59DCLY;Z\u007f=@*=", 00:08:28.063 "method": "nvmf_create_subsystem", 00:08:28.063 "req_id": 1 00:08:28.063 } 00:08:28.063 Got JSON-RPC error response 00:08:28.063 response: 00:08:28.063 { 00:08:28.063 "code": -32602, 00:08:28.063 "message": "Invalid MN 3;Oe\\2YM;DKFJa&6>Z]6UZnIe::Z59DCLY;Z\u007f=@*=" 00:08:28.063 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:28.063 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:08:28.063 [2024-07-15 18:33:44.689342] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.063 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:08:28.321 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:08:28.321 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:08:28.321 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:08:28.321 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:08:28.321 18:33:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:08:28.580 [2024-07-15 18:33:45.078664] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:08:28.580 18:33:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:08:28.580 { 00:08:28.580 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:28.580 
"listen_address": { 00:08:28.580 "trtype": "tcp", 00:08:28.580 "traddr": "", 00:08:28.580 "trsvcid": "4421" 00:08:28.580 }, 00:08:28.580 "method": "nvmf_subsystem_remove_listener", 00:08:28.580 "req_id": 1 00:08:28.580 } 00:08:28.580 Got JSON-RPC error response 00:08:28.580 response: 00:08:28.580 { 00:08:28.580 "code": -32602, 00:08:28.580 "message": "Invalid parameters" 00:08:28.580 }' 00:08:28.580 18:33:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:08:28.580 { 00:08:28.580 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:28.580 "listen_address": { 00:08:28.580 "trtype": "tcp", 00:08:28.580 "traddr": "", 00:08:28.580 "trsvcid": "4421" 00:08:28.580 }, 00:08:28.580 "method": "nvmf_subsystem_remove_listener", 00:08:28.580 "req_id": 1 00:08:28.580 } 00:08:28.580 Got JSON-RPC error response 00:08:28.580 response: 00:08:28.580 { 00:08:28.580 "code": -32602, 00:08:28.580 "message": "Invalid parameters" 00:08:28.580 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:08:28.580 18:33:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4479 -i 0 00:08:28.580 [2024-07-15 18:33:45.275283] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4479: invalid cntlid range [0-65519] 00:08:28.838 18:33:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:08:28.838 { 00:08:28.838 "nqn": "nqn.2016-06.io.spdk:cnode4479", 00:08:28.838 "min_cntlid": 0, 00:08:28.838 "method": "nvmf_create_subsystem", 00:08:28.838 "req_id": 1 00:08:28.838 } 00:08:28.838 Got JSON-RPC error response 00:08:28.838 response: 00:08:28.838 { 00:08:28.838 "code": -32602, 00:08:28.838 "message": "Invalid cntlid range [0-65519]" 00:08:28.838 }' 00:08:28.838 18:33:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:08:28.838 { 00:08:28.838 "nqn": "nqn.2016-06.io.spdk:cnode4479", 00:08:28.838 "min_cntlid": 0, 
00:08:28.838 "method": "nvmf_create_subsystem", 00:08:28.838 "req_id": 1 00:08:28.838 } 00:08:28.838 Got JSON-RPC error response 00:08:28.838 response: 00:08:28.838 { 00:08:28.838 "code": -32602, 00:08:28.838 "message": "Invalid cntlid range [0-65519]" 00:08:28.838 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:28.838 18:33:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3924 -i 65520 00:08:28.838 [2024-07-15 18:33:45.459936] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3924: invalid cntlid range [65520-65519] 00:08:28.838 18:33:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:08:28.838 { 00:08:28.838 "nqn": "nqn.2016-06.io.spdk:cnode3924", 00:08:28.838 "min_cntlid": 65520, 00:08:28.838 "method": "nvmf_create_subsystem", 00:08:28.838 "req_id": 1 00:08:28.838 } 00:08:28.838 Got JSON-RPC error response 00:08:28.838 response: 00:08:28.838 { 00:08:28.838 "code": -32602, 00:08:28.838 "message": "Invalid cntlid range [65520-65519]" 00:08:28.838 }' 00:08:28.838 18:33:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:08:28.838 { 00:08:28.838 "nqn": "nqn.2016-06.io.spdk:cnode3924", 00:08:28.838 "min_cntlid": 65520, 00:08:28.838 "method": "nvmf_create_subsystem", 00:08:28.838 "req_id": 1 00:08:28.838 } 00:08:28.838 Got JSON-RPC error response 00:08:28.838 response: 00:08:28.838 { 00:08:28.838 "code": -32602, 00:08:28.838 "message": "Invalid cntlid range [65520-65519]" 00:08:28.838 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:28.838 18:33:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28900 -I 0 00:08:29.128 [2024-07-15 18:33:45.636585] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28900: invalid 
cntlid range [1-0] 00:08:29.128 18:33:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:08:29.128 { 00:08:29.128 "nqn": "nqn.2016-06.io.spdk:cnode28900", 00:08:29.128 "max_cntlid": 0, 00:08:29.128 "method": "nvmf_create_subsystem", 00:08:29.128 "req_id": 1 00:08:29.128 } 00:08:29.128 Got JSON-RPC error response 00:08:29.128 response: 00:08:29.128 { 00:08:29.128 "code": -32602, 00:08:29.128 "message": "Invalid cntlid range [1-0]" 00:08:29.128 }' 00:08:29.128 18:33:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:08:29.128 { 00:08:29.128 "nqn": "nqn.2016-06.io.spdk:cnode28900", 00:08:29.128 "max_cntlid": 0, 00:08:29.128 "method": "nvmf_create_subsystem", 00:08:29.128 "req_id": 1 00:08:29.128 } 00:08:29.128 Got JSON-RPC error response 00:08:29.128 response: 00:08:29.128 { 00:08:29.128 "code": -32602, 00:08:29.128 "message": "Invalid cntlid range [1-0]" 00:08:29.128 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:29.128 18:33:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25379 -I 65520 00:08:29.129 [2024-07-15 18:33:45.821202] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25379: invalid cntlid range [1-65520] 00:08:29.387 18:33:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:08:29.387 { 00:08:29.387 "nqn": "nqn.2016-06.io.spdk:cnode25379", 00:08:29.387 "max_cntlid": 65520, 00:08:29.387 "method": "nvmf_create_subsystem", 00:08:29.387 "req_id": 1 00:08:29.387 } 00:08:29.387 Got JSON-RPC error response 00:08:29.387 response: 00:08:29.387 { 00:08:29.387 "code": -32602, 00:08:29.387 "message": "Invalid cntlid range [1-65520]" 00:08:29.387 }' 00:08:29.387 18:33:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:08:29.387 { 00:08:29.387 "nqn": "nqn.2016-06.io.spdk:cnode25379", 00:08:29.387 "max_cntlid": 65520, 00:08:29.387 
"method": "nvmf_create_subsystem", 00:08:29.387 "req_id": 1 00:08:29.387 } 00:08:29.387 Got JSON-RPC error response 00:08:29.387 response: 00:08:29.387 { 00:08:29.387 "code": -32602, 00:08:29.387 "message": "Invalid cntlid range [1-65520]" 00:08:29.387 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:29.387 18:33:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31814 -i 6 -I 5 00:08:29.387 [2024-07-15 18:33:46.009851] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31814: invalid cntlid range [6-5] 00:08:29.387 18:33:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:08:29.387 { 00:08:29.387 "nqn": "nqn.2016-06.io.spdk:cnode31814", 00:08:29.387 "min_cntlid": 6, 00:08:29.387 "max_cntlid": 5, 00:08:29.387 "method": "nvmf_create_subsystem", 00:08:29.387 "req_id": 1 00:08:29.387 } 00:08:29.387 Got JSON-RPC error response 00:08:29.387 response: 00:08:29.387 { 00:08:29.387 "code": -32602, 00:08:29.387 "message": "Invalid cntlid range [6-5]" 00:08:29.387 }' 00:08:29.387 18:33:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:08:29.387 { 00:08:29.387 "nqn": "nqn.2016-06.io.spdk:cnode31814", 00:08:29.387 "min_cntlid": 6, 00:08:29.387 "max_cntlid": 5, 00:08:29.387 "method": "nvmf_create_subsystem", 00:08:29.387 "req_id": 1 00:08:29.387 } 00:08:29.387 Got JSON-RPC error response 00:08:29.387 response: 00:08:29.387 { 00:08:29.387 "code": -32602, 00:08:29.387 "message": "Invalid cntlid range [6-5]" 00:08:29.387 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:29.387 18:33:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:08:29.646 18:33:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:08:29.646 { 00:08:29.646 "name": 
"foobar", 00:08:29.646 "method": "nvmf_delete_target", 00:08:29.646 "req_id": 1 00:08:29.646 } 00:08:29.646 Got JSON-RPC error response 00:08:29.646 response: 00:08:29.646 { 00:08:29.646 "code": -32602, 00:08:29.646 "message": "The specified target doesn'\''t exist, cannot delete it." 00:08:29.646 }' 00:08:29.646 18:33:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:08:29.646 { 00:08:29.646 "name": "foobar", 00:08:29.646 "method": "nvmf_delete_target", 00:08:29.646 "req_id": 1 00:08:29.646 } 00:08:29.646 Got JSON-RPC error response 00:08:29.646 response: 00:08:29.646 { 00:08:29.646 "code": -32602, 00:08:29.646 "message": "The specified target doesn't exist, cannot delete it." 00:08:29.646 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:08:29.646 18:33:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:08:29.646 18:33:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:08:29.646 18:33:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:29.646 18:33:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:08:29.647 18:33:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:29.647 18:33:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:08:29.647 18:33:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:29.647 18:33:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:29.647 rmmod nvme_tcp 00:08:29.647 rmmod nvme_fabrics 00:08:29.647 rmmod nvme_keyring 00:08:29.647 18:33:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:29.647 18:33:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:08:29.647 18:33:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:08:29.647 18:33:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 972106 ']' 00:08:29.647 18:33:46 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 972106 00:08:29.647 18:33:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 972106 ']' 00:08:29.647 18:33:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 972106 00:08:29.647 18:33:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:08:29.647 18:33:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:29.647 18:33:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 972106 00:08:29.647 18:33:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:29.647 18:33:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:29.647 18:33:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 972106' 00:08:29.647 killing process with pid 972106 00:08:29.647 18:33:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 972106 00:08:29.647 18:33:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 972106 00:08:29.905 18:33:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:29.905 18:33:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:29.905 18:33:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:29.905 18:33:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:29.905 18:33:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:29.905 18:33:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.905 18:33:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:29.905 18:33:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.811 18:33:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 
-- # ip -4 addr flush cvl_0_1 00:08:31.811 00:08:31.811 real 0m11.701s 00:08:31.811 user 0m19.515s 00:08:31.811 sys 0m5.027s 00:08:31.811 18:33:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.811 18:33:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:31.811 ************************************ 00:08:31.811 END TEST nvmf_invalid 00:08:31.811 ************************************ 00:08:32.072 18:33:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:32.072 18:33:48 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:32.072 18:33:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:32.072 18:33:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.072 18:33:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:32.072 ************************************ 00:08:32.072 START TEST nvmf_abort 00:08:32.072 ************************************ 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:32.072 * Looking for test storage... 
00:08:32.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:08:32.072 18:33:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:37.368 18:33:53 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:37.368 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:37.368 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- 
# (( 0 > 0 )) 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:37.368 Found net devices under 0000:86:00.0: cvl_0_0 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:37.368 Found net devices under 
0000:86:00.1: cvl_0_1 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:37.368 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:37.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:37.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:08:37.368 00:08:37.368 --- 10.0.0.2 ping statistics --- 00:08:37.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.369 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:08:37.369 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:37.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:37.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:08:37.369 00:08:37.369 --- 10.0.0.1 ping statistics --- 00:08:37.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.369 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:08:37.369 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:37.369 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:08:37.369 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:37.369 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:37.369 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:37.369 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:37.369 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:37.369 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:37.369 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:37.369 18:33:53 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:37.369 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:37.369 18:33:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:37.369 18:33:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:37.369 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=976357 00:08:37.369 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:37.369 18:33:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 976357 00:08:37.369 18:33:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 976357 ']' 00:08:37.369 18:33:53 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.369 18:33:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:37.369 18:33:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.369 18:33:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:37.369 18:33:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:37.369 [2024-07-15 18:33:53.773820] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:08:37.369 [2024-07-15 18:33:53.773863] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.369 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.369 [2024-07-15 18:33:53.830508] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:37.369 [2024-07-15 18:33:53.906534] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.369 [2024-07-15 18:33:53.906573] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.369 [2024-07-15 18:33:53.906580] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.369 [2024-07-15 18:33:53.906585] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:37.369 [2024-07-15 18:33:53.906590] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:37.369 [2024-07-15 18:33:53.906702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.369 [2024-07-15 18:33:53.906798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.369 [2024-07-15 18:33:53.906799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.936 18:33:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:37.936 18:33:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:08:37.936 18:33:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:37.936 18:33:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:37.936 18:33:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:37.936 18:33:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.936 18:33:54 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:37.936 18:33:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.936 18:33:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:37.936 [2024-07-15 18:33:54.606360] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.936 18:33:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.936 18:33:54 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:37.936 18:33:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.936 18:33:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:38.196 Malloc0 00:08:38.196 18:33:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.196 18:33:54 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:08:38.196 18:33:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.196 18:33:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:38.196 Delay0 00:08:38.196 18:33:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.196 18:33:54 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:38.196 18:33:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.196 18:33:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:38.196 18:33:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.196 18:33:54 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:38.196 18:33:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.196 18:33:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:38.196 18:33:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.196 18:33:54 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:38.196 18:33:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.196 18:33:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:38.196 [2024-07-15 18:33:54.686213] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.196 18:33:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.196 18:33:54 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:38.196 18:33:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.196 18:33:54 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:08:38.196 18:33:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.196 18:33:54 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:38.196 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.196 [2024-07-15 18:33:54.792982] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:40.734 Initializing NVMe Controllers 00:08:40.734 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:40.734 controller IO queue size 128 less than required 00:08:40.734 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:40.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:40.734 Initialization complete. Launching workers. 
00:08:40.734 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 42853 00:08:40.734 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42914, failed to submit 62 00:08:40.734 success 42857, unsuccess 57, failed 0 00:08:40.734 18:33:56 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:40.734 18:33:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.734 18:33:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:40.734 18:33:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.734 18:33:56 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:40.734 18:33:56 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:40.734 18:33:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:40.734 18:33:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:40.734 18:33:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:40.734 18:33:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:40.734 18:33:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:40.734 18:33:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:40.734 rmmod nvme_tcp 00:08:40.734 rmmod nvme_fabrics 00:08:40.734 rmmod nvme_keyring 00:08:40.734 18:33:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:40.734 18:33:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:40.734 18:33:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:40.734 18:33:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 976357 ']' 00:08:40.734 18:33:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 976357 00:08:40.734 18:33:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 976357 ']' 00:08:40.734 18:33:56 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 976357 00:08:40.734 18:33:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:08:40.734 18:33:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:40.734 18:33:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 976357 00:08:40.734 18:33:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:40.734 18:33:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:40.734 18:33:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 976357' 00:08:40.734 killing process with pid 976357 00:08:40.734 18:33:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 976357 00:08:40.734 18:33:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 976357 00:08:40.734 18:33:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:40.734 18:33:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:40.734 18:33:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:40.734 18:33:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:40.734 18:33:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:40.734 18:33:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.734 18:33:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.734 18:33:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.640 18:33:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:42.640 00:08:42.640 real 0m10.684s 00:08:42.640 user 0m12.796s 00:08:42.640 sys 0m4.761s 00:08:42.640 18:33:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:08:42.640 18:33:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:42.640 ************************************ 00:08:42.640 END TEST nvmf_abort 00:08:42.640 ************************************ 00:08:42.640 18:33:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:42.640 18:33:59 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:42.640 18:33:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:42.640 18:33:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.640 18:33:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:42.640 ************************************ 00:08:42.640 START TEST nvmf_ns_hotplug_stress 00:08:42.640 ************************************ 00:08:42.640 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:42.899 * Looking for test storage... 
00:08:42.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:42.899 18:33:59 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:42.899 18:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.168 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.168 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:48.168 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:08:48.168 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:48.168 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:48.168 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:48.168 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:48.168 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:48.168 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:48.168 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:08:48.168 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:48.168 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:08:48.168 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:48.168 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:48.168 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:48.168 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.168 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.168 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.168 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.168 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.168 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.168 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.168 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.168 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.168 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.168 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.168 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:48.168 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:48.169 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.169 
18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:48.169 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:48.169 
Found net devices under 0000:86:00.0: cvl_0_0 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:48.169 Found net devices under 0000:86:00.1: cvl_0_1 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:48.169 18:34:04 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:48.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:48.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:08:48.169 00:08:48.169 --- 10.0.0.2 ping statistics --- 00:08:48.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.169 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:48.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:48.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:08:48.169 00:08:48.169 --- 10.0.0.1 ping statistics --- 00:08:48.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.169 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=980239 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 980239 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 980239 ']' 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:48.169 18:34:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.169 [2024-07-15 18:34:04.672796] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:08:48.169 [2024-07-15 18:34:04.672843] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.169 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.169 [2024-07-15 18:34:04.730147] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:48.169 [2024-07-15 18:34:04.809612] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.169 [2024-07-15 18:34:04.809649] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.169 [2024-07-15 18:34:04.809656] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.169 [2024-07-15 18:34:04.809662] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.169 [2024-07-15 18:34:04.809667] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:48.169 [2024-07-15 18:34:04.809767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:48.169 [2024-07-15 18:34:04.809864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:48.169 [2024-07-15 18:34:04.809865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.106 18:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:49.106 18:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:08:49.106 18:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:49.106 18:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:49.106 18:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:49.106 18:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.106 18:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:49.106 18:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:49.106 [2024-07-15 18:34:05.670657] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.106 18:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:49.365 18:34:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.365 [2024-07-15 18:34:06.039961] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:08:49.365 18:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:49.624 18:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:49.883 Malloc0 00:08:49.883 18:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:50.142 Delay0 00:08:50.142 18:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.142 18:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:50.401 NULL1 00:08:50.401 18:34:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:50.660 18:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=980627 00:08:50.660 18:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:08:50.660 18:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:50.660 18:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.660 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.660 18:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.919 18:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:50.919 18:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:51.192 true 00:08:51.192 18:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:08:51.192 18:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.506 18:34:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.506 18:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:51.506 18:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:51.765 true 00:08:51.765 18:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:08:51.765 18:34:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.721 Read completed with error (sct=0, sc=11) 00:08:52.721 18:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.721 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.979 18:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:52.979 18:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:53.238 true 00:08:53.238 18:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:08:53.238 18:34:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.174 18:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.174 18:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:54.174 18:34:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:54.432 true 00:08:54.432 18:34:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:08:54.432 18:34:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.691 18:34:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.691 18:34:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:54.691 18:34:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:54.950 true 00:08:54.950 18:34:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:08:54.950 18:34:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.209 18:34:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.495 [2024-07-15 18:34:11.933838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.495 [2024-07-15 18:34:11.933912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.495 [2024-07-15 
18:34:11.933960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.495 [2024-07-15 18:34:11.935761] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.495 [2024-07-15 18:34:11.935801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.495 [2024-07-15 18:34:11.935838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.495 [2024-07-15 18:34:11.935875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.495 [2024-07-15 18:34:11.935914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.495 [2024-07-15 18:34:11.935947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.495 [2024-07-15 18:34:11.935985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.495 [2024-07-15 18:34:11.936021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.495 [2024-07-15 18:34:11.936063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.495 [2024-07-15 18:34:11.936109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.495 [2024-07-15 18:34:11.936156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.495 [2024-07-15 18:34:11.936190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.495 [2024-07-15 18:34:11.936229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.495 [2024-07-15 18:34:11.936270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.495 [2024-07-15 18:34:11.936309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.495 [2024-07-15 18:34:11.936348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.495 [2024-07-15 18:34:11.936386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.495 [2024-07-15 18:34:11.936427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.495 [2024-07-15 18:34:11.936468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.495 [2024-07-15 18:34:11.936657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.936696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.936736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.936776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.936822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.936871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.936920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.936962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.937006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.937054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.937098] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.937143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.937196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.937241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.937287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.937332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.937376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.937420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.937462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.937509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.937552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.937595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.937635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.937677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.937723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.937768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.937814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.937852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.937881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.937921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.937967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.938004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.938044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.938081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.938120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.938160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.938196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.938247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.938285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 
18:34:11.938322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.938358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.938396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.938438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.938477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.938517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.938555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.938593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.938631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.938671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.938711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.938753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.938790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.938831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.938870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.938910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.938953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.938991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.939028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.939069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.939106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.939144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.939182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.939223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.940039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.940089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.940138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.940181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.940231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 
[2024-07-15 18:34:11.940280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.940322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.940368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.940412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.940458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.940504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.940547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.940592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.940635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.940687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.940728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.940771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.940820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.940860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.940909] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.940959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.941000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.941046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.941094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.941146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.941197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.941247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.941293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.941340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.941386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.941429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.941476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.941521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.941566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.496 [2024-07-15 18:34:11.941605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.941645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.941686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.941729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.941758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.941802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.941843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.496 [2024-07-15 18:34:11.941887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.941926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.941970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.942022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.942062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.942103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.942143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.942186] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.942230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.942268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.942305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.942344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.942387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.942428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.942469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.942509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.942554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.942596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.942635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.942676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.942716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.942756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.942791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.942968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.943009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.943047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.943085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.943124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.943167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.943214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.943257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.943299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.943340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.943376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.943418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.943459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 
18:34:11.943496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.943535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.943582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.943627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.944128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.944173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.944221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.944268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.944307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.944351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.944398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.944447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.944494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.944537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.944583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.944628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.944657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.944695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.944735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.944776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.944814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.944853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.944903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.944947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.944986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.945025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.945065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.945109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.945155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 
[2024-07-15 18:34:11.945193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.945239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.945280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.945327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.945367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.945415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.945454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.945497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.945539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.945578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.945619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.945659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.945699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.945739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.945782] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.497 [2024-07-15 18:34:11.945822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-07-15 18:34:11.960011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.500 [2024-07-15 18:34:11.960061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.500 [2024-07-15 18:34:11.960105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.500 [2024-07-15 18:34:11.960154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.500 [2024-07-15 18:34:11.960198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.500 [2024-07-15 18:34:11.960247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.500 [2024-07-15 18:34:11.960295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.500 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:55.500 [2024-07-15 18:34:11.961098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.500 [2024-07-15 18:34:11.961148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.500 [2024-07-15 18:34:11.961194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.500 [2024-07-15 18:34:11.961245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.500 [2024-07-15 18:34:11.961290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.500 [2024-07-15 18:34:11.961337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.500 [2024-07-15 18:34:11.961393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:08:55.500 [2024-07-15 18:34:11.961438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.500 [2024-07-15 18:34:11.961480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.500 [2024-07-15 18:34:11.961530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.500 [2024-07-15 18:34:11.961575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.500 [2024-07-15 18:34:11.961618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.500 [2024-07-15 18:34:11.961666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.500 [2024-07-15 18:34:11.961710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.500 [2024-07-15 18:34:11.961755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.961802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.961848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.961892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.961935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.961980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.962023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.962066] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.962111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.962153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.962198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.962250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.962298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.962339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.962369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.962410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.962447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.962489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.962526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.962567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.962606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.962646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.501 [2024-07-15 18:34:11.962693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.962736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.962782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.962824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.962865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.962898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.962937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 18:34:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:55.501 [2024-07-15 18:34:11.962978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.963018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.963065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.963105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.963144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.963186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.963237] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.963281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 18:34:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:55.501 [2024-07-15 18:34:11.963320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.963367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.963399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.963437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.963473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.963512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.963552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.963590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.963623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.963663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.963701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.963744] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.963787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.963974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.964016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.964056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.964096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.964135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.964180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.964228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.964279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.964320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.964366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.964416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.964465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.964513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.501 [2024-07-15 18:34:11.964563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.964609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.964654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.964701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.965214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.965271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.965307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.965343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.965378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.965417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.965455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.965500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.965545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.965585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.965624] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.965666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.965697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.965734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.965779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.965818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.965856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.965901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.965950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.965998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.966037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.966069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.966106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.966145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.966184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.966221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.966264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.966307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.966345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.966385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.966425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.501 [2024-07-15 18:34:11.966474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.966519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.966559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.966597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.966635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.966676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.966720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.966764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 
18:34:11.966812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.966858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.966901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.966944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.966998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.967044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.967088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.967135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.967181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.967229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.967272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.967319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.967369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.967411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.967452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.967500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.967543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.967592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.967641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.967689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.967731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.967782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.967826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.967871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.967921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.968101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.968146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.968200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.968253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 
[2024-07-15 18:34:11.968302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.968351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.968393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.968436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.968483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.968530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.968574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.968624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.968668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.968713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.968760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.968802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.968847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.968892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.968935] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.968980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.969028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.969069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.969110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.969148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.969191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.969236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.969273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.969311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.969349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.969393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.969432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.969469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.502 [2024-07-15 18:34:11.969506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.502 [2024-07-15 18:34:11.969551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:08:55.506 [2024-07-15 18:34:11.984917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.984962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.985014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.985058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.985103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.985149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.985195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.985246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.985296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.985342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.985385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.985433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.985483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.985528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.985575] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.985627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.986105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.986149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.986194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.986242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.986276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.986318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.986362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.986407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.986454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.986493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.986533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.986571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.986608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.986646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.986687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.986729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.986767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.986807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.986847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.986887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.986928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.986967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.987004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.987042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.987083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.987124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.987166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 
18:34:11.987207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.987255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.987301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.987345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.987393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.987438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.987484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.987531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.987575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.987623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.987672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.987715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.987761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.987804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.987850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.987894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.987942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.987986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.988029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.988076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.988119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.988163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.988215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.988263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.988310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.988358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.988405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.988451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.988497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 
[2024-07-15 18:34:11.988543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.988585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.988630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.506 [2024-07-15 18:34:11.988680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.988727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.988772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.988817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.988860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.989059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.989107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.989149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.989195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.989248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.989292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.989337] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.989382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.989426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.989473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.989524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.989568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.989612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.989652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.989690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.989727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.989761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.989796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.989838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.989875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.989921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.507 [2024-07-15 18:34:11.989963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.990004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.990045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.990083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.990120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.990162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.990212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.990261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.990298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.990338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.990380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.990419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.990460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.990496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.990533] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.990573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.990619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.990657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.990698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.990737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.990777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.990807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.990849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.990885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.990926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.990964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.991006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.991044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.991082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.991120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.991168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.991207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.991247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.991287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.991328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.991369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.991412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.991448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.991489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.991533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.991575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.991619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.992426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 
18:34:11.992476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.992525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.992583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.992627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.992665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.992696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.992738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.992779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.992813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.992860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.992902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.992942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.992991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.993030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.993062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.993106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.993148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.993189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.993231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.993274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.993322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.993362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.993401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.993441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.993477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.993517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.993560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.993599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.993640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 
[2024-07-15 18:34:11.993680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.993721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.993761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.993801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.507 [2024-07-15 18:34:11.993843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.508 [2024-07-15 18:34:11.993885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.508 [2024-07-15 18:34:11.993925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.508 [2024-07-15 18:34:11.993964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.508 [2024-07-15 18:34:11.994001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.508 [2024-07-15 18:34:11.994042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.508 [2024-07-15 18:34:11.994085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.508 [2024-07-15 18:34:11.994132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.508 [2024-07-15 18:34:11.994183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.508 [2024-07-15 18:34:11.994231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.508 [2024-07-15 18:34:11.994273] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.508 [2024-07-15 18:34:11.994323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [identical message repeated, timestamps 18:34:11.994369 through 18:34:12.009349, elapsed 00:08:55.508-00:08:55.511] [2024-07-15 18:34:12.009393] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.009435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.009473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.009513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.009553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.009818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.009864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.009911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.009951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.009991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.010028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.010070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.010109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.010153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.010191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.511 [2024-07-15 18:34:12.010232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.010265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.010303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.010344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.010384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.010429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.010469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.010509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.010548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.010589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.010629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.010669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.010710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.010751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.010792] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.010840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.010889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.010937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.010985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.011030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.011080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.011129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.011172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.011219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.011264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.011310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.011361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.011405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.011452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.011496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.011545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.011588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.511 [2024-07-15 18:34:12.011637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.011681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.011728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.011780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.011825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.011873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.011920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.011964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.012006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.012051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.012098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 
18:34:12.012143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.012189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.012239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.012288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.012332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.012376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.012420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.012463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.012503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.012542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.012583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.013374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:55.512 [2024-07-15 18:34:12.013425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.013470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:08:55.512 [2024-07-15 18:34:12.013507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.013546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.013583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.013622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.013660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.013700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.013741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.013781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.013822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.013861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.013903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.013939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.013982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.014020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.014058] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.014087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.014127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.014171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.014208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.014257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.014302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.014347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.014393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.014447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.014492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.014538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.014585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.014629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.014672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.512 [2024-07-15 18:34:12.014718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.014764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.014810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.014858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.014907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.014948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.014994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.015040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.015082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.015121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.015164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.015210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.015258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.015300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.015332] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.015370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.015411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.015449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.015492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.015533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.015574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.015610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.015649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.015692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.015733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.015776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.015817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.015855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.015899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.015939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.015982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.016019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.016216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.016266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.016309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.016357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.016403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.016453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.016498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.016541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.016588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.016640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.016683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 
18:34:12.016723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.016774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.016822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.016867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.016918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.017430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.017476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.017525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.512 [2024-07-15 18:34:12.017572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.017614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.017658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.017708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.017756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.017804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.017849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.017894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.017942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.017983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.018027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.018072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.018115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.018162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.018204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.018249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.018302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.018347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.018391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.018436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.018480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 
[2024-07-15 18:34:12.018529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.018575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.018628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.018670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.018718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.018765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.018806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.018851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.018889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.018931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.018970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.019010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.019040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.019082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.019120] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.513 [2024-07-15 18:34:12.019158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[last message repeated through 2024-07-15 18:34:12.034190]
00:08:55.516 [2024-07-15 18:34:12.034238] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.034283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.034339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.034388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.034433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.034479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.034529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.034570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.034618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.034662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.034715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.034759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.034807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.034856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.034900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.516 [2024-07-15 18:34:12.034943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.034997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.035039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.035084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.035141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.035186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.035236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.035284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.035328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.035360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.035403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.035444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.035482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.035524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.035568] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.035605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.035650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.035696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.035742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.035774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.035811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.035854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.035900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.035945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.035986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.036025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.036197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.036241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.036283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.036323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.036362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.036404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.036443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.036484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.036525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.516 [2024-07-15 18:34:12.036564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.036602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.036640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.036681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.036720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.036767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.036813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.036856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 
18:34:12.037559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.037606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.037653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.037700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.037746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.037791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.037837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.037883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.037930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.037977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.038027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.038068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.038115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.038162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.038204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.038255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.038305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.038351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.038397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.038441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.038483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.038522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.038559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.038593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.038628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.038666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.038701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.038743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.038782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 
[2024-07-15 18:34:12.038820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.038860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.038901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.038947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.038991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.039033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.039079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.039116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.039152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.039192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.039233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.039275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.039314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.039351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.039395] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.039436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.039474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.039515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.039556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.039595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.039636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.039675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.039713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.039751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.039791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.039828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.039866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.039907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.039944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.517 [2024-07-15 18:34:12.039997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.040040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.040089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.040137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.040179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.040235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.040425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.040482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.040525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.040571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.040618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.040668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.040720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.040766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.040813] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.040863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.040907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.040949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.041000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.041046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.041090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.041133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.041181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.041227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.041269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.041319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.041364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.041410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.041454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.041495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.517 [2024-07-15 18:34:12.041539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.041583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.041629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.041675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.041714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.041746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.041788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.041825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.041865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.041906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.041943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.041989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.042029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 
18:34:12.042070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.042110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.042146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.042189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.042240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.042273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.042313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.042350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.042390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.042432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.042479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.042522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.042561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.042601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.042645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.042696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.042738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.042772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.042808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.042846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.042890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.042931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.042973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.043011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.043052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.043095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.043955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.044004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.044053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 
[2024-07-15 18:34:12.044108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.044153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.044198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.044252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.044297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.044340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.044383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.044428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.044476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.044523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.044574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.044615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.044659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.044706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.518 [2024-07-15 18:34:12.044753] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 
[2024-07-15 18:34:12.059102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.059150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.059196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.059245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.059294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.059345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.059386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.059433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.059483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.059528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.059573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.059621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.059665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.059707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.059753] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.059803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.059847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.059894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.059946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.059993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.060038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.060081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.060127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.060171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.060215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.060264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.060314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.060360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.060403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.521 [2024-07-15 18:34:12.060449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.060507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.060551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.060581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.060622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.060661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.060700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.060745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.060792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.060841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.060880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.060920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.060957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.061001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.061044] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.061084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.061118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.061386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.061430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.061471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.061517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.061558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.061595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.061628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.061669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.061710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.061751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.521 [2024-07-15 18:34:12.061793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.061834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.061872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.061918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.061956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:55.522 [2024-07-15 18:34:12.061993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.062031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.062070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.062113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.062153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.062194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.062243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.062289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.062334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.062378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.062428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.062470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.062514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.062566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.062611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.062655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.062708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.062752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.062798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.062845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.062891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.062937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.062981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.063030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.063081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 
[2024-07-15 18:34:12.063124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.063169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.063215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.063260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.063311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.063358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.063410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.063455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.063499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.063550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.063594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.063642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.063689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.063736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.063779] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.063825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.063877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.063920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.063964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.063997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.064033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.064073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.064115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.064153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.064933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.064981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.065022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.065053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.065094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.522 [2024-07-15 18:34:12.065136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.065177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.065218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.065262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.065303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.065343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.065383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.065421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.065460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.065502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.065546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.065587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.065624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.065667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.065707] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.065750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.065800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.065844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.065888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.065940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.065986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.066034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.066083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.066128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.066169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.066214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.066268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.522 [2024-07-15 18:34:12.066315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.066364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.066412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.066455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.066498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.066549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.066597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.066642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.066687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.066739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.066788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.066836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.066880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.066924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.066971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.067018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 
18:34:12.067062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.067107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.067153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.067200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.067251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.067308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.067352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.067392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.067424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.067466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.067510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.067546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.067588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.067625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.067669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.067868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.067913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.067948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.067989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.068027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.068067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.068108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.068147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.068192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.068237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.068279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.068317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.068355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.068401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 
[2024-07-15 18:34:12.068432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.068473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.068513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.068561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.068604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.068643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.068685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.068725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.068766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.068811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.068853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.068891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.068932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.068972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.523 [2024-07-15 18:34:12.069016] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-07-15 18:34:12.083596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.084427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.084478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.084529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.084574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.084620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.084667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.084716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.084763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.084807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.084853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.084899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.084943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.084992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.085045] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.085088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.085133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.085178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.085227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.085272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.085314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.085360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.085405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.085448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.085484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.085526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.085564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.085605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.085646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.526 [2024-07-15 18:34:12.085677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.085721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.085760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.085799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.085840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.085878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.085924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.085963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.086003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.086043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.086082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.086117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.086153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.086198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.086256] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.086306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.086350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.086396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.086444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.086490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.086534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.086579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.086627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.086670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.086715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.086763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.086807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.086851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.086897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.086948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.526 [2024-07-15 18:34:12.086996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.087042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.087093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.087135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.087179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.087228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.087435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.087483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.087528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.087582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.087626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.087669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.087721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 
18:34:12.087765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.087808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.087848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.087886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.087929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.087969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.088008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.088047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.088085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.088129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.088174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.088211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.088254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.088296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.088337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.088377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.088417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.088457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.088497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.088540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.088586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.088628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.088668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.088708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.088743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.088784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.088821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.088860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.088898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 
[2024-07-15 18:34:12.088941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.088985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.089025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.089065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.089102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.089143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.089184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.089230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.089273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.089318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.089369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.089414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.089459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.089511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.089560] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.089604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.089649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.089698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.089739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.089780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.089822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.089863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.089900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.089940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.089975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.090020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.090061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.090645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.090694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.527 [2024-07-15 18:34:12.090733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.090777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.090817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.090858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.090895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.090934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.090982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.091027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.091073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.091119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.091168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.091212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.091258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.091302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.091346] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.091390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.091432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.091472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.091511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.091545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.091583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.091621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.091662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.091709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.091754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.091796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.091842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.091890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.091931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.091974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.092021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.092070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.092116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.092159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.092205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.527 [2024-07-15 18:34:12.092251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.528 [2024-07-15 18:34:12.092298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.528 [2024-07-15 18:34:12.092346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.528 [2024-07-15 18:34:12.092392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.528 [2024-07-15 18:34:12.092437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.528 [2024-07-15 18:34:12.092488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.528 [2024-07-15 18:34:12.092537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.528 [2024-07-15 18:34:12.092579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.528 [2024-07-15 
18:34:12.092625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.528 [2024-07-15 18:34:12.092669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.528 [2024-07-15 18:34:12.092722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.528 [2024-07-15 18:34:12.092768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.528 [2024-07-15 18:34:12.092814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.528 [2024-07-15 18:34:12.092866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.528 [2024-07-15 18:34:12.092907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.528 [2024-07-15 18:34:12.092948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.528 [2024-07-15 18:34:12.092986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.528 [2024-07-15 18:34:12.093032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.528 [2024-07-15 18:34:12.093074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.528 [2024-07-15 18:34:12.093117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.528 [2024-07-15 18:34:12.093155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.528 [2024-07-15 18:34:12.093201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.528 [2024-07-15 18:34:12.093244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.528 [2024-07-15 18:34:12.093285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.528
[... identical error line repeated from 18:34:12.093319 through 18:34:12.109481; only timestamps differ ...]
[2024-07-15 18:34:12.109513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.109551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.109596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.109637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.109679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.109721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.109760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.109800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.109839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.109879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.109919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.109971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.110011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.110044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.110085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 
[2024-07-15 18:34:12.110125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.110164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.110204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.110250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.110296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.110337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.110375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.110414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.110453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.110493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.110529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.110568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.110604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.110644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.110687] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.110727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.110764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.110803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.110843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.110884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.110927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.110970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.111170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.111218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.111268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.111316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.111363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.111410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.111458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.531 [2024-07-15 18:34:12.111499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.111549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.111601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.111650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.111696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.111742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.111795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.111840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.111888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.111933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.112431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.112481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.112530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.112573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.112629] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.112675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.112720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.112770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.112815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.112857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.112892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.112933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.112969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.113010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.113051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.113090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.113129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.113174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.113214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.113265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.113309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.113348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.113385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.113423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.113463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.113508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.113549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.113589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.531 [2024-07-15 18:34:12.113629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.113674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.113715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.113755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.113792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 
18:34:12.113836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.113869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.113908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.113947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.113990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.114031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.114076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.114122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.114164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.114204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.114247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.114288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.114330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.114372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.114418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.114466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.114510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.114556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.114603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.114654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.114702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.114746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.114797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.114838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.114884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.114929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.114972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.115020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.115064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 
[2024-07-15 18:34:12.115111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.115160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.115356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.115403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:55.532 [2024-07-15 18:34:12.115456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.115504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.115552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.115600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.115650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.115693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.115741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.115788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.115843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.115889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:08:55.532 [2024-07-15 18:34:12.115936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.115982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.116035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.116076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.116118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.116164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.116211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.116256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.116297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.116329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.116365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.116406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.116459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.116499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.116540] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.116583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.116621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.116661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.116699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.116747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.116789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.116835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.116869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.116907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.116945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.116984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.117032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.117072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.117111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.532 [2024-07-15 18:34:12.117144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.117186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.117227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.117267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.117306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.117346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.117385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.117428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.117467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.117507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.117551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.117594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.117635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.117674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.117715] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.117762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.117819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.117865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.117910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.117957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.118010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.118057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.118869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.118922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.118961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.118992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.119028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.119075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.532 [2024-07-15 18:34:12.119121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.533 [2024-07-15 18:34:12.119160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.535 true 00:08:55.536 [2024-07-15 18:34:12.134599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.134642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.134686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.134729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.134778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.134824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.134870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.134916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.134964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.135009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.135055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.135105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.135162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.135198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.135233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 
[2024-07-15 18:34:12.135276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.135315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.135353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.135392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.135432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.135470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.135510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.135557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.135598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.135639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.135681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.135720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.135928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.135969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.136012] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.136049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.136097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.136137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.136182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.136230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.136269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.136306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.136342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.136385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.136428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.136465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.136509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.136555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.136595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.536 [2024-07-15 18:34:12.136633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.136673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.136713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.136753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.136800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.136837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.136879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.136922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.136966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.137026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.137072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.137119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.137165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.137211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.137256] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.137309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.137353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.137399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.137447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.137498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.137545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.137592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.137637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.137684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.137728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.137773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.137822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.137866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.137912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.138632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.138680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.138715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.138754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.138793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.138834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.138873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.138913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.138953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.138993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.139036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.139080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.139118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.139157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 
18:34:12.139197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.139230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.139270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.139309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.139347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.139385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.139423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.139460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.139497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.139542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.139584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.139622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.139660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.139692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.139732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.139768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.139808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.139846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.139889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.536 [2024-07-15 18:34:12.139935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.139975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.140013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.140058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.140096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.140134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.140170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.140214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.140265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.140310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 
[2024-07-15 18:34:12.140357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.140401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.140454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.140498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.140542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.140588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.140641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.140686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.140732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.140775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.140818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.140864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.140910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.140958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.141001] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.141046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.141092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.141144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.141187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.141236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.141281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.141476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.141523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.141566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.141610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.141662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.141710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.141756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.141806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.537 [2024-07-15 18:34:12.141852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.141900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.141947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.141990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.142027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.142068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.142104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.142135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.142174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.142214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.142255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.142298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.142340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.142381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.142422] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.142463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.142499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.142550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.142589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.142631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.142662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.142700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.142740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.142783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.142826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.142870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.142910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.142948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.142990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.143027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.143066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.143105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.143150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.143188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.143230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.143269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.143308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.143346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.143388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.143427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.143465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.143502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 18:34:12.143538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537 [2024-07-15 
18:34:12.143590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.537
[... identical *ERROR* lines from ctrlr_bdev.c:309 repeated, timestamps 18:34:12.143642 through 18:34:12.156821 ...]
00:08:55.540 18:34:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627
00:08:55.540 18:34:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
[... identical *ERROR* lines continue, timestamps 18:34:12.156869 through 18:34:12.159439 ...]
block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.159483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.159535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.159578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.159626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.159672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.159718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.159759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.159806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.159855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.159901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.159948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.159997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.160037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.160084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 
18:34:12.160138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.160183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.160229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.160275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.160318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.160362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.160409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.160454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.160495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.160548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.160599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.160643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.160691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.160737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.160782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.160825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.160869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.160914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.160962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.161009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.161052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.161096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.161136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.161167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.161210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.161252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.161291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.161331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.161372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 
[2024-07-15 18:34:12.161412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.161458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.161503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.161774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.161814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.161852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.161897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.161941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.161978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.162019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.162060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.162102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.162150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.162197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.162240] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.162279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.162319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.162356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.162395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.162436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.162475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.162516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.162555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.162598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.162636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.162672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.162719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.162764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.162804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.541 [2024-07-15 18:34:12.162847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.162891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.162935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.162983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.163031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.163080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.163124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.163177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.163219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.163268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.163313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.163361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.163405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.163454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.163500] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.163546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.163591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.163635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.163679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.163726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.163774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.163820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.163863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.163916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.163957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.164001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.164051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.164095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.164138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.541 [2024-07-15 18:34:12.164182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.164228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.164272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.164316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.164361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.164402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.164445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.164490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.164540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:55.542 [2024-07-15 18:34:12.165357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.165400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.165444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.165487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.165532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.165572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.165614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.165649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.165686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.165725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.165762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.165807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.165849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.165889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.165930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.165970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.166008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.166050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.166092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 
[2024-07-15 18:34:12.166130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.166178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.166223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.166274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.166324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.166367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.166412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.166464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.166506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.166551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.166604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.166647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.166691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.166735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.166778] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.166817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.166848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.166892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.166935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.166968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.167009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.167045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.167082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.167124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.167166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.167209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.167260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.167303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.167350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.542 [2024-07-15 18:34:12.167395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.167443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.167492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.167536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.167584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.167634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.167677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.167721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.167764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.167814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.167858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.167905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.167958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.168012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.168060] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.168276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.168325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.168369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.168415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.168453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.168495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.168535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.168575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.168606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.168647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.168689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.168735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.168780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.168829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.542 [2024-07-15 18:34:12.168874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.823 [2024-07-15 18:34:12.184325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.823 [2024-07-15 18:34:12.184369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.184416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.184460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.184503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.184549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.184591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.184636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.184680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.184727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.184770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.184815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.184863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.184908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.184954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 
18:34:12.185001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.185043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.185090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.185132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.185334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.185378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.185419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.185451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.185494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.185531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.185569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.185613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.185658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.185705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.185744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.185784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.185821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.185862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.185912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.185943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.185982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.186019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.186056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.186098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.186138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.186181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.186221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.186263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.186303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 
[2024-07-15 18:34:12.186341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.186378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.186422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.186464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.186505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.186546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.186589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.186632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.186680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.186727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.186769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.186815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.186868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.186911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.186958] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.187001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.187058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.187104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.187152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.187200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.187248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.187292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.187344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.187390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.187434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.187472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.187517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.187571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.187608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.824 [2024-07-15 18:34:12.187647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.187692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.187741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.187780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.187818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.187859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.187897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.187935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.187974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.188911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.188963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.189008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.189054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.189104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.189151] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.189202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.189253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.189299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.189347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.189396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.189443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.189488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.189535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.189586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.189636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.189678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.189724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.189773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.189819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.189862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.189908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.189963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.824 [2024-07-15 18:34:12.190007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.190058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.190106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.190152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.190197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.190242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.190285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.190329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.190381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.190425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.190471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 
18:34:12.190520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.190564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.190606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.190649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.190692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.190730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.190770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.190812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.190844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.190878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.190916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.190952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.190989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.191028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.191070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.191114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.191155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.191196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.191246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.191288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.191332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.191366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.191407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.191449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.191489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.191535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.191583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.191631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.191671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 
[2024-07-15 18:34:12.191708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.191925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.191967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.192006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.192046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.192087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.192121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.192164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.192209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.192264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.192308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.192365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.192410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.192451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.192506] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.192547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.192593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.192638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.192687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.192734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.192782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.192828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.192875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.192922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.192970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.193014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.193057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.193104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.193151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.825 [2024-07-15 18:34:12.193197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.193246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.193293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.193338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.193383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.193434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.193477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.193522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.193579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.193624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.193670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.193717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.193760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.193807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.825 [2024-07-15 18:34:12.193850] 
00:08:55.825 [2024-07-15 18:34:12.193885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical error line repeated continuously from 18:34:12.193885 through 18:34:12.208715 (several hundred occurrences, timestamps 00:08:55.825–00:08:55.828); duplicate entries elided ...]
> SGL length 1 00:08:55.828 [2024-07-15 18:34:12.208764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.828 [2024-07-15 18:34:12.208804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.828 [2024-07-15 18:34:12.208846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.828 [2024-07-15 18:34:12.208888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.828 [2024-07-15 18:34:12.209329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.828 [2024-07-15 18:34:12.209382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.828 [2024-07-15 18:34:12.209429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.828 [2024-07-15 18:34:12.209480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.828 [2024-07-15 18:34:12.209524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.828 [2024-07-15 18:34:12.209569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.828 [2024-07-15 18:34:12.209615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.828 [2024-07-15 18:34:12.209657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.828 [2024-07-15 18:34:12.209701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.828 [2024-07-15 18:34:12.209749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.828 [2024-07-15 18:34:12.209799] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.828 [2024-07-15 18:34:12.209851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.828 [2024-07-15 18:34:12.209898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.828 [2024-07-15 18:34:12.209944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.828 [2024-07-15 18:34:12.209989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.828 [2024-07-15 18:34:12.210022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.828 [2024-07-15 18:34:12.210060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.828 [2024-07-15 18:34:12.210102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.828 [2024-07-15 18:34:12.210139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.828 [2024-07-15 18:34:12.210177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.210218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.210265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.210305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.210341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.210379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.210412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.210453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.210491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.210528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.210568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.210605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.210642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.210681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.210724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.210762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.210802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.210846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.210886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.210924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 
18:34:12.210965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.211005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.211046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.211089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.211132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.211167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.211207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.211253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.211299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.211344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.211388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.211438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.211482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.211529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.211578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.211621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.211663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.211712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.211757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.211800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.211847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.211899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.211943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.211988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.212272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.212318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.212366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.212412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.212456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 
[2024-07-15 18:34:12.212500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.212547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.212599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.212647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.212695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.212736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.212782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.212838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.212887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.212934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.212985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.213028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.213074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.213119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.213164] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.213210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.213259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.213302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.213348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.213388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.213431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.213475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.213507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.213548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.213588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.213628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.213668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.213713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.213754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.829 [2024-07-15 18:34:12.213797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.213837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.213880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.213922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.213960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.213995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.214032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.214072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.214109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.214150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.214192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.214232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.214280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.214321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.214360] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.214398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.214441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.214483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.214513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.214551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.214589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.214629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.214669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.214712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.214751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.214793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.214833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.829 [2024-07-15 18:34:12.214874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.214912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.214951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:55.830 [2024-07-15 18:34:12.215816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.215872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.215916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.215962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.216006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.216053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.216096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.216140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.216184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.216229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.216275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.216319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.216363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.216418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.216467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.216510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.216556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.216598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.216630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.216672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.216709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.216749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.216789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.216826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.216867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.216907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.216946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 
[2024-07-15 18:34:12.216984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.217031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.217062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.217100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.217140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.217183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.217226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.217266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.217309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.217353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.217392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.217429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.217461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.217501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.217541] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.217582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.217625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.217664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.217705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.217748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.217791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.217828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.217870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.217906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.217944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.217985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.218023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.218062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.218111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.830 [2024-07-15 18:34:12.218155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.218202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.218249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.218294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.218338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.218380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.218430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.218471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.218678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.218723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.218771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.218816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.218867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.218918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.218963] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.830 [2024-07-15 18:34:12.219004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [same *ERROR* message repeated for timestamps 18:34:12.219054 through 18:34:12.233451; repeats omitted] 00:08:55.833 [2024-07-15 18:34:12.233496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:08:55.833 [2024-07-15 18:34:12.233536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.233584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.233635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.233677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.233725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.233772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.233821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.233865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.233903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.233949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.233985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.234020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.234058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.234094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.234137] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.234177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.234223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.234270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.234313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.234353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.234390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.234431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.234471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.234507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.234550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.234589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.234628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.234667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.234705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.234742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.234789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.234835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.234877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.234919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.234956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.234993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.235023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.235065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.235103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.235143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.235182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.235220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.235261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 
18:34:12.235304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.833 [2024-07-15 18:34:12.235346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.235385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.235423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.235463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.235507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.235548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.235590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.235631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.235670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.235712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.235745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.235795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.236600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.236650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.236698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.236744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.236788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.236833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.236882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.236923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.236969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.237017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.237050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.237088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.237129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.237168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.237207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.237256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 
[2024-07-15 18:34:12.237297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.237338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.237378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.237414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.237458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.237488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.237527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.237563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.237612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.237654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.237692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.237732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.237777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.237824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.237863] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.237894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.237930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.237970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.238012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.238057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.238098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.238139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.238179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.238220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.238264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.238303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.238341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.238382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.238421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.834 [2024-07-15 18:34:12.238464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.238508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.238552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.238596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.238645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.238693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.238738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.238785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.238829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.238874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.238919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.238963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.239014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.239060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.239107] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.239154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.239199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.239248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.239292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.239488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.239538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.239582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.239632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.239677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.239723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.239768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.239817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.239860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.239910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.239956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.240003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.240053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.240096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.240137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.240187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.240238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.240725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.240764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.240801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.240845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.240888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.240928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.240974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 
18:34:12.241019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.241060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.241097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.241140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.241188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.241240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.834 [2024-07-15 18:34:12.241282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.241316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.241351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.241392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.241438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.241488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.241532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.241571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.241610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.241650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.241687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.241728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.241768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.241812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.241842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.241890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.241927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.241967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.242005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.242046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.242085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.242127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.242164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 
[2024-07-15 18:34:12.242202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.242245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.242287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.242325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.242364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.242404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.242442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.242484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.242522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.242562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.242607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.242653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.242708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.242751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.242794] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.242841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.242883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.242926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.242974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.243023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.243068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.243116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.243172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.243215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.243264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.243309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.243356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.243404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.835 [2024-07-15 18:34:12.243611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.835 [2024-07-15 18:34:12.243656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:08:55.838 [2024-07-15 18:34:12.258750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.258782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.258822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.258864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.258901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.258948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.258992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.259039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.259079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.259110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.259153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.259194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.259245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.259285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.259327] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.259367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.259409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.259447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.259486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.259527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.259572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.259613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.259650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.259691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.259730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.259772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.259814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.259857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.259896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.259935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.260748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.260797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.260842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.260885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.260931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.260981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.261028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.261076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.261118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.261160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.261207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.261250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.261293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 
18:34:12.261342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.261387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.261429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.261474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.261519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.261565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.261609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.838 [2024-07-15 18:34:12.261654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.261697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.261749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.261789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.261838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.261881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.261922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.261968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.262025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.262079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.262118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.262163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.262211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.262264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.262308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.262351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.262402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.262446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.262496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.262545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.262589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.262634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 
[2024-07-15 18:34:12.262674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.262716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.262754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.262793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.262823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.262867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.262904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.262942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.262984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.263028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.263073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.263112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.263153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.263192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.263236] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.263276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.263316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.263347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.263389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.263428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.263472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.263511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.263700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.263744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.263792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.263831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.263861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.263902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.263939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.839 [2024-07-15 18:34:12.263979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.264022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.264067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.264110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.264150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.264192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.264236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.264278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.264321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.264363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.264901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.264949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.264995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.265040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.265085] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.265135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.265179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.265228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.265272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.265322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.265367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.265409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.265455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.265511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.265557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.265602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.265642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.265677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.265721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.265759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.265803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.265843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.265888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.265927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.265967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.265997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.266037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.266079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.266122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.839 [2024-07-15 18:34:12.266168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.266210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.266254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.266294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 
18:34:12.266333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.266373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.266412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.266454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.266495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.266533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.266572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.266611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.266657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.266695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.266734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.266773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.266814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.266851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.266905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.266951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.266995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.267046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.267085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.267131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.267181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.267227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.267273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.267320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.267364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.267405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.267453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.267503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.267548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 
[2024-07-15 18:34:12.267590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.267636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.267837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.267884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:55.840 [2024-07-15 18:34:12.267926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.267976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.268023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.268067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.268110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.268155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.268200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.268249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.268299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.268344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:08:55.840 [2024-07-15 18:34:12.268389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.268436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.268482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.268526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.268568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.268608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.268652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.268703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.268746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.268786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.268818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.268884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.268928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.268967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.840 [2024-07-15 18:34:12.269007] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843
[2024-07-15 18:34:12.283797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.283841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.283888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.284072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.284125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.284169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.284214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.284263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.284308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.284346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.284383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.284421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.284461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.284509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.284559] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.284600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.284638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.284680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.284712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.284750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.284788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.284831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.284872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.284912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.284954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.284989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.285028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.285069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.285113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.843 [2024-07-15 18:34:12.285154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.285194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.285238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.285278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.285320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.285363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.285407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.285451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.285899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.285948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.285992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.286037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.286082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.286134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.286181] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.286232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.286279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.286324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.286372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.286418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.286464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.286519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.286565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.286609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.286659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.286707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.286752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.286799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.843 [2024-07-15 18:34:12.286844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.286889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.286934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.286980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.287025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.287074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.287118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.287172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.287215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.287264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.287309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.287360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.287408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.287449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.287491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 
18:34:12.287529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.287571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.287614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.287659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.287700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.287740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.287789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.287824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.287865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.287907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.287949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.287992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.288039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.288080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.288122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.288163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.288203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.288254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.288301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.288336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.288375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.288411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.288451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.288497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.288537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.288579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.288624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.288669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.288709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 
[2024-07-15 18:34:12.288892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.288938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.288983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.289033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.289076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.289124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.289170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.289216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.289265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.289313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.289359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.289407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.289452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.289500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.289546] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.289594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.289642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.289690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.289736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.289781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.289822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.289870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.289914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.289961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.290009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.290055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.290117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.290163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.290212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.844 [2024-07-15 18:34:12.290935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.290980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.291022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.291058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.291099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.291134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.291173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.291217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.291268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.291310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.291351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.291394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.291438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.291481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.291520] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.291559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.291596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.291633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.291673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.291714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.291754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.291794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.291831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.291872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.291913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.291952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.291989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.292031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.292068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.292111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.292154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.292191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.292238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.844 [2024-07-15 18:34:12.292284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.845 [2024-07-15 18:34:12.292328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.845 [2024-07-15 18:34:12.292373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.845 [2024-07-15 18:34:12.292416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.845 [2024-07-15 18:34:12.292464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.845 [2024-07-15 18:34:12.292512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.845 [2024-07-15 18:34:12.292560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.845 [2024-07-15 18:34:12.292608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.845 [2024-07-15 18:34:12.292652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.845 [2024-07-15 18:34:12.292706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.845 [2024-07-15 
18:34:12.292750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.845 [2024-07-15 18:34:12.292797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.845 [2024-07-15 18:34:12.292844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.845 [2024-07-15 18:34:12.292893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.845 [2024-07-15 18:34:12.292938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.845 [2024-07-15 18:34:12.292982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.845 [2024-07-15 18:34:12.293033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.845 [2024-07-15 18:34:12.293081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.845 [2024-07-15 18:34:12.293127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.845 [2024-07-15 18:34:12.293175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.845 [2024-07-15 18:34:12.293219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.845 [2024-07-15 18:34:12.293269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.845 [2024-07-15 18:34:12.293317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.845 [2024-07-15 18:34:12.293363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.845 [2024-07-15 18:34:12.293409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.845 [2024-07-15 18:34:12.293456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.845
[... same ctrlr_bdev.c:309:nvmf_bdev_ctrlr_read_cmd error repeated through 2024-07-15 18:34:12.308713 (elapsed 00:08:55.845-00:08:55.848) ...]
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.308757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.308804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.308851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.308898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.308948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.308993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.309045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.309093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.309140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.309189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.309242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.309286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.309325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.309363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 
[2024-07-15 18:34:12.309407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.309451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.309491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.309532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.309574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.309612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.309653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.309833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.309871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.309913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.309954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.309993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.310031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.310068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.310108] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.310146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.310190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.310237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.310279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.310321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.310353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.310393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.310435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.310477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.311266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.311315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.311364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.311411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.311464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.848 [2024-07-15 18:34:12.311512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.311560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.311603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.311652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.311696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.311741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.311790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.311843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.311888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.311931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.311981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.312025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.312073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.312118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.312165] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.312217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.312265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.312308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.312354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.312400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.312449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.312497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.312546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.312595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.312654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.312702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.312748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.312787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.312824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.312864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.312902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.312942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.312984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.313023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.848 [2024-07-15 18:34:12.313065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.313101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.313145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.313191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.313236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.313278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.313321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.313354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.313397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 
18:34:12.313434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.313475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.313514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.313558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.313598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.313636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.313675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.313717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.313762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.313805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.313838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.313877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.313919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.313958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.313999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.314042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.314241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.314290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.314328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.314372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.314412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.314459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.314503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.314548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.314595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.314644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.314690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.314738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.314787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 
[2024-07-15 18:34:12.314833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.314878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.314923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.314977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.315020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.315066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.315116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.315161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.315210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.315255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.315303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.315350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.315401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.315445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.315497] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.315544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.315591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.315635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.315682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.315726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.315774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.315818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.315862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.315912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.315956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.316005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.316054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.316100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.316147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:55.849 [2024-07-15 18:34:12.316185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.316227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.316280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.316319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.316753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.316796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.316838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.316878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.316917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.316959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.317008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.317054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.317098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.317148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.317189] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:55.849 [2024-07-15 18:34:12.317221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.317265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.317302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.317349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.317391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.317439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.317485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.317522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.317571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.317609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.317648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.317685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.317725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 
18:34:12.317770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.317813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.317859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.317904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.317949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.317992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.318039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.318083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.318133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.318180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.318232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.849 [2024-07-15 18:34:12.318280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.850 [2024-07-15 18:34:12.318328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.850 [2024-07-15 18:34:12.318374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.850 [2024-07-15 18:34:12.318419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.850 [2024-07-15 18:34:12.318465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.851 18:34:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.851 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:56.134 [2024-07-15 18:34:12.537950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.134
[2024-07-15 18:34:12.547053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.547097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.547141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.547188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:56.136 [2024-07-15 18:34:12.547241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.547289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.547333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.547374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.547420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.547466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.547508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.547557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.547597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.547645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:08:56.136 [2024-07-15 18:34:12.547695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.548192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.548238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.548270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.548311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.548350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.548386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.548427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.548466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.548512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.548555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.548601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.548644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.548690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.548731] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.548768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.548805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.548840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.548882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.548921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.548965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.549009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.549057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.549099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.549133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.549171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.549209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.549251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.549290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:56.136 [2024-07-15 18:34:12.549330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.549371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.549411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.549449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.549488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.549528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.549567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.549606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.549648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.549683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.549724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.549771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.549818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.549863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.549907] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.549951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.549995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.550038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.550083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.550129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.550170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.550215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.550270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.550311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.550358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.550407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.550449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.550498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.550549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.550593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.550623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.550662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.550699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.550737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.550774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.550811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.550977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.551019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.551070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.551113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.551158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.551202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.551243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 
18:34:12.551275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.551314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.136 [2024-07-15 18:34:12.551355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.551398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.551437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.551477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.551520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.551559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.551599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.551638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.551678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.551718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.551759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.551797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.551839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.551875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.551923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.551971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.552021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.552063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.552105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.552155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.552202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.552253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.552300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.552343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.552385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.552431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.552475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 
[2024-07-15 18:34:12.552513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.552554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.552585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.552626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.552664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.552697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.552735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.552783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.552825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.552873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.552920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.552965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.553008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.553058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.553102] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.553149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.553199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.553246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.553294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.553339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.553383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.553431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.553479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.553524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.553564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.553611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.553661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.554484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.554541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:56.137 [2024-07-15 18:34:12.554591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.554637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.554689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.554734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.554777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.554826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.554867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.554915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.554960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.555005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.555046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.555085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.555122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.555158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.555200] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.555243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.555281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.555311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.555353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.555392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.555430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.555468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.555508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.555546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.555593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.555641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.555683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.555722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.555768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.555810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.555844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.555883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.555922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.555966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.556008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.556047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.556084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.556123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.556165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.556206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.556251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.556289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 18:34:12.556330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137 [2024-07-15 
[2024-07-15 18:34:12.556371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.137
(identical *ERROR* line repeated for each rejected read from 18:34:12.556406 through 18:34:12.571831)
00:08:56.140 18:34:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:08:56.140 18:34:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
[2024-07-15 18:34:12.571871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.140 [2024-07-15 18:34:12.571910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.571952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.571992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.572029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.572071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.572111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.572153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.572191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.572232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.572271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.572314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.572354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.572399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.572447] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.572496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.572540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.572588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.572637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.572684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.572728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.572775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.572822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.572864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.572910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.572955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.573000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.573047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.573098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:56.141 [2024-07-15 18:34:12.573146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.573192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.573241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.573293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.573336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.573382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.573428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.573479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.573522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.573568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.573620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.573671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.573719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.573766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.573813] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.573859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.573906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.573949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.573996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.574039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.574086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.574129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.574162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.574361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.574406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.574448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.574494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.574534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.574574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.574612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.574650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.574681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.574720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.574769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.574819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.574868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.574911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.574951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.575000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.575041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.575079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.575121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.575170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 
18:34:12.575200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.575245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.575281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.575321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.575361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.575402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.575441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.575479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.575520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.575564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.575603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.575649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.575689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.575730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.575769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.575805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.575849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.575897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.575943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.575986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.576029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.576073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.576115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.576162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.576213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.576258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.576303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.576356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.576403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 
[2024-07-15 18:34:12.576445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.576496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.576541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.576588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.576633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.576676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.141 [2024-07-15 18:34:12.576723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.576769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.576816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.576859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.576905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.576952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.577000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.577048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.577093] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.577892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.577935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.577980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.578021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.578061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.578095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.578133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.578173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.578222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.578266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.578306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.578346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.578392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.578433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:56.142 [2024-07-15 18:34:12.578475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.578517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.578554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.578590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.578623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.578663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.578705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.578746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.578786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.578826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.578865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.578905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.578941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.578983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.579024] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.579063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.579103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.579140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.579184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.579228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.579277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.579318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.579359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.579410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.579461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.579503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.579548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.579594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.579636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.579685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.579732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.579776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.579821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.579866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.579910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.579954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.579998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.580043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.580091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.580136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.580184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.580231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.580279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 
18:34:12.580323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.580370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.580414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.580459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.580507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.580552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.580600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.580788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.580836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.580886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.580931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.580965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.581003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.581045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.581086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142 [2024-07-15 18:34:12.581125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.142
[... identical "nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" records repeated through 18:34:12.596301, log elided ...]
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.145 [2024-07-15 18:34:12.596347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.145 [2024-07-15 18:34:12.596393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.145 [2024-07-15 18:34:12.596450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.145 [2024-07-15 18:34:12.596493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.145 [2024-07-15 18:34:12.596541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.145 [2024-07-15 18:34:12.596584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.145 [2024-07-15 18:34:12.596629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.145 [2024-07-15 18:34:12.596666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.145 [2024-07-15 18:34:12.596707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.145 [2024-07-15 18:34:12.596743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.145 [2024-07-15 18:34:12.596779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.145 [2024-07-15 18:34:12.596822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.145 [2024-07-15 18:34:12.596864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.145 [2024-07-15 18:34:12.596902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.145 
[2024-07-15 18:34:12.596946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.145 [2024-07-15 18:34:12.596987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.145 [2024-07-15 18:34:12.597033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.145 [2024-07-15 18:34:12.597075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.145 [2024-07-15 18:34:12.597116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.145 [2024-07-15 18:34:12.597157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.145 [2024-07-15 18:34:12.597195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.597242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.597286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:56.146 [2024-07-15 18:34:12.597773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.597817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.597862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.597902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.597942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:08:56.146 [2024-07-15 18:34:12.597986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.598026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.598065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.598107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.598152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.598199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.598248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.598296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.598345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.598397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.598437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.598485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.598534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.598580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.598625] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.598668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.598714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.598763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.598810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.598859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.598905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.598953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.598999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.599060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.599104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.599154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.599203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.599253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.599295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:56.146 [2024-07-15 18:34:12.599343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.599382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.599428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.599475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.599520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.599563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.599613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.599659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.599704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.599752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.599802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.599846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.599878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.599916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.599960] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.600000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.600042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.600083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.600121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.600161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.600201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.600246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.600287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.600329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.600372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.600409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.600452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.600494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.600532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.600570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.600751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.600793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.600832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.600864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.600908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.600944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.600981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.601022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.601063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.601102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.601144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.601184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.601223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 
18:34:12.601267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.601304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.601342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.601379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.602090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.602147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.602190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.602239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.602290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.602339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.602383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.602430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.602474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.602521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.602577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.602623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.602672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.146 [2024-07-15 18:34:12.602721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.602768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.602813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.602863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.602916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.602961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.603006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.603055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.603102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.603149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.603193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.603245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 
[2024-07-15 18:34:12.603282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.603320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.603365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.603408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.603446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.603485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.603521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.603564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.603606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.603646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.603689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.603725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.603762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.603796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.603832] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.603880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.603926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.603966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.604004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.604043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.604081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.604121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.604160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.604197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.604246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.604283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.604321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.604360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.604400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:56.147 [2024-07-15 18:34:12.604438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.604477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.604515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.604557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.604607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.604644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.604683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.604721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.604757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.604794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.604978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.605024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.605069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.605120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.605165] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.605205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.605257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.605300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.605346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.605390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.605438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.605484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.605527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.605575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.605619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.605666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.605720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.605765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.605815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:56.147 [2024-07-15 18:34:12.605859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.147
[... same *ERROR* line repeated verbatim through 2024-07-15 18:34:12.620871; duplicates omitted ...]
00:08:56.150 [2024-07-15 18:34:12.620917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 *
block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.620966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.621010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.621056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.621099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.621144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.621189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.621238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.621287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.621332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.621381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.621425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.621470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.622306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.622359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 
18:34:12.622405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.622450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.622493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.622537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.622575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.622614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.622649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.622684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.622725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.622760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.622798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.622836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.622874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.622912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.622952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.622992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.623029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.623066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.623111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.623159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.623202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.623242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.150 [2024-07-15 18:34:12.623285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.623324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.623361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.623399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.623442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.623486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.623523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 
[2024-07-15 18:34:12.623560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.623600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.623640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.623680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.623718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.623757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.623797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.623834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.623875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.623916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.623952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.623994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.624042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.624088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.624132] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.624172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.624223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.624270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.624314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.624357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.624400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.624444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.624494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.624536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.624582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.624629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.624675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.624721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.624766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:56.151 [2024-07-15 18:34:12.624812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.624863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.624908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.624952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.625132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.625186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.625235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.625281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.625331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.625377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.625421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.625473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.625516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.625564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.625618] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.625660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.625701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.625734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.625773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.625811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.625849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.626252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.626296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.626332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.626373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.626411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.626454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.626501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.626542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.626582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.626624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.626666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.626702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.626738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.626782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.626819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.626865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.626902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.626944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.626982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.627023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.627066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.627105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 
18:34:12.627146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.627186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.627219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.627267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.627313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.627359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.627410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.627459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.627501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.151 [2024-07-15 18:34:12.627553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.627595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.627642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.627690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.627736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.627779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.627826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.627869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.627913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.627963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.628007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.628051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.628097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.628147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.628191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.628240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.628286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.628331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.628375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.628422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 
[2024-07-15 18:34:12.628469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.628511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.628558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.628601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.628645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.628694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.628738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.628781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.628829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.628872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.628919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.628969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.629013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.629186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.629221] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.629266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.629305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.629345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.629383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.629421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.629459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.629501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.629542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.629584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.629626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.629663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.629699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.629734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.629775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:56.152 [2024-07-15 18:34:12.629821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.629865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.629909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.629950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.629993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.630025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.630061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.630102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.630141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.630179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.630220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.630264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.630311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.630352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.152 [2024-07-15 18:34:12.630395] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[same *ERROR* line repeated for each read command, 2024-07-15 18:34:12.630433 through 18:34:12.645160 (log timestamps 00:08:56.152-00:08:56.155); repeats elided]
> SGL length 1 00:08:56.155 [2024-07-15 18:34:12.645200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.645241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.645281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.645318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.645361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.645408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.645449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.645489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.645526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.645570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.645601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.645639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.645679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.645727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.645776] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.645818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.645856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.645897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.645937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.645980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.646021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.646059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.646097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.646140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.646179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.646218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.646260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.646304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.646343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.646378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.646430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.646474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.646521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.646569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.646620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.646666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.646711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.646761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.646803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.646846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.646891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.646937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.646982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 
18:34:12.647026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.647076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:56.155 [2024-07-15 18:34:12.647877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.647928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.647976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.648023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.648067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.648112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.648156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.155 [2024-07-15 18:34:12.648198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.648246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.648292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.648337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.648385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:08:56.156 [2024-07-15 18:34:12.648432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.648474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.648515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.648552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.648593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.648630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.648660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.648698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.648739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.648783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.648832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.648870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.648911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.648950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.648990] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.649029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.649067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.649107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.649150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.649180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.649220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.649260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.649299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.649340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.649383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.649421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.649462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.649502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.649544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:56.156 [2024-07-15 18:34:12.649589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.649633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.649670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.649715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.649754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.649794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.649835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.649873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.649914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.649954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.649997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.650037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.650075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.650114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.650150] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.650190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.650230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.650269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.650309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.650348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.650388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.650427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.650472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.650652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.650699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.650745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.650797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.650841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.650887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.650940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.650984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.651027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.651071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.651120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.651168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.651222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.651269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.651314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.651368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.651418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.651906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.651948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.651991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 
18:34:12.652030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.652076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.652124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.652171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.652201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.652242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.652283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.652319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.652357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.652399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.652445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.652490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.652533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.652570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.652602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.652641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.652682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.652721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.652764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.652804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.652843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.652881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.652922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.652963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.653008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.653047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.653085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.653127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.653168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 
[2024-07-15 18:34:12.653212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.156 [2024-07-15 18:34:12.653266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.653313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.653362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.653407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.653458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.653502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.653549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.653596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.653639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.653686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.653729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.653774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.653820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.653874] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.653916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.653961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.654004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.654050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.654095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.654143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.654186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.654233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.654283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.654328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.654375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.654420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.654466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.654516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:56.157 [2024-07-15 18:34:12.654560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.654607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.654648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.654836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.654883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.654928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.654977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.655025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.655070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.655115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.655163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.655206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.655255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.655298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.157 [2024-07-15 18:34:12.655342] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:08:56.160 [2024-07-15 18:34:12.669828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:08:56.160 [2024-07-15 18:34:12.669868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.669907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.669948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.669986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.670029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.670065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.670105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.670141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.670184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.670232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.670278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.670322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.670368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.670416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.670464] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.670512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.670556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.670602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.670646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.670696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.670741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.670788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.670839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.670880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.670926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.670969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.671016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.671059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.671104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.671157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.671202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.671244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.671299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.671342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.671389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.671570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.671613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.671657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.671702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.671744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.671793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.671831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.671863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 
18:34:12.671907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.671951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.671990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.672031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.672069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.672110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.672160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.672209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.672261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.672653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.672695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.672738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.672777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.672820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.672867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.672905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.672943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.672982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.673025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.673064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.673105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.673147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.673183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.673223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.673273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.673309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.673348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.673387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.673427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 
[2024-07-15 18:34:12.673466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.673502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.673546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.673590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.673640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.160 [2024-07-15 18:34:12.673685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.673733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.673778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.673823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.673869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.673916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.673960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.674006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.674049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.674094] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.674142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.674189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.674239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.674285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.674327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.674375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.674423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.674469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.674517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.674572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.674624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.674666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.674713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.674768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:56.161 [2024-07-15 18:34:12.674812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.674857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.674906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.674954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.675002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.675052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.675097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.675144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.675187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.675236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.675281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.675313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.675355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.675398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.675675] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.675718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.675758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.675796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.675827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.675864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.675904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.675948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.675991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.676037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.676077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.676116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.676156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.676194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.676241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.676285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.676317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.676362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.676399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.676439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.676479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.676519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.676557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.676600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.676640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.676679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.676719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.676758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.676798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 
18:34:12.676836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.676875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.676912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.676956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.677007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.677055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.677097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.677148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.677196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.677242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.677288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.677333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.677381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.677426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.677476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.677521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.677566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.677610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.677655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.677700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.677748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.677792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.677837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.677886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.677938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.677984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.678026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.678074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.678124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 
[2024-07-15 18:34:12.678170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.678215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.678261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.678306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.678352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.678404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.679199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.679242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.679284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.679327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.161 [2024-07-15 18:34:12.679374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.162 [2024-07-15 18:34:12.679418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.162 [2024-07-15 18:34:12.679458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.162 [2024-07-15 18:34:12.679504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.162 [2024-07-15 18:34:12.679550] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.162 [2024-07-15 18:34:12.679594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.162 [2024-07-15 18:34:12.679633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.162 [2024-07-15 18:34:12.679675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.162 [2024-07-15 18:34:12.679706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.162 [2024-07-15 18:34:12.679745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.162 [2024-07-15 18:34:12.679780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.162 [2024-07-15 18:34:12.679820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.162 [2024-07-15 18:34:12.679860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.162 [2024-07-15 18:34:12.679899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.162 [2024-07-15 18:34:12.679938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.162 [2024-07-15 18:34:12.679978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.162 [2024-07-15 18:34:12.680021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.162 [2024-07-15 18:34:12.680061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.162 [2024-07-15 18:34:12.680099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:56.162 [2024-07-15 18:34:12.680142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[previous message repeated for every subsequent timestamp from 18:34:12.680178 through 18:34:12.695089, identical except for the timestamp]
00:08:56.165 [2024-07-15 18:34:12.695137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:08:56.165 [2024-07-15 18:34:12.695187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.695233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.695280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.695331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.695380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.695427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.695469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.695520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.696303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.696350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.696400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.696440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.696482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.696522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.696571] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.696613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.696650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.696685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.696723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.696763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.696804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.696845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.696885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.696925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.696965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.697006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.697045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.697084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.697121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.697162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.697198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.697249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.697291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.697320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.697360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.697403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.697445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.697485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.697534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.697583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.697629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.697671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.697724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 
18:34:12.697771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.697815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.697858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.697907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.697952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.697999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.698045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.698092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.698143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.698188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.698236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.698286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.698329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.698375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.698422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.698472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.698517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.698563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.698611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.698659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.698703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.698749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.698807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.698853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.698900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.698946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.698991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.699037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.699080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 
[2024-07-15 18:34:12.699283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.699324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:56.165 [2024-07-15 18:34:12.699363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.699407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.699447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.699486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.165 [2024-07-15 18:34:12.699525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.699560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.699600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.699640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.699680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.699720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.699762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.699805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:08:56.166 [2024-07-15 18:34:12.699848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.699890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.699929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.699974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.700014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.700046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.700086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.700129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.700170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.700210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.700254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.700297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.700340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.700382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.700421] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.700460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.700500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.700539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.700586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.700633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.700687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.700730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.700776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.700821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.700870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.700916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.700960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.701004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.701054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:56.166 [2024-07-15 18:34:12.701102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.701149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.701199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.701252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.701299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.701343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.701389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.701439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.701485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.701529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.701573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.701620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.701663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.701708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.701751] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.701808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.701857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.701905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.701951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.701999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.702793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.702837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.702877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.702923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.702964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.703004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.703044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.703086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.703126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.703173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.703222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.703267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.703309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.703346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.703386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.703426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.703467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.703507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.703553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.703593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.703631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.703670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.703712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 
18:34:12.703751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.703787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.703829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.703874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.703920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.703967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.704014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.704061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.704121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.704166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.704212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.704264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.704316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.704359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.704402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.704449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.704502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.704546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.704591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.704635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.704684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.704729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.704775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.704813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.704849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.166 [2024-07-15 18:34:12.704887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.167 [2024-07-15 18:34:12.704926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.167 [2024-07-15 18:34:12.704970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.167 [2024-07-15 18:34:12.705017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.167 
[2024-07-15 18:34:12.705063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.167 [2024-07-15 18:34:12.705107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.167 [2024-07-15 18:34:12.705145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.167 [2024-07-15 18:34:12.705176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.167 [2024-07-15 18:34:12.705212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.167 [2024-07-15 18:34:12.705252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.167 [2024-07-15 18:34:12.705286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.167 [2024-07-15 18:34:12.705327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.167 [2024-07-15 18:34:12.705374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.167 [2024-07-15 18:34:12.705412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.167 [2024-07-15 18:34:12.705451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.167 [2024-07-15 18:34:12.705484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.167 [2024-07-15 18:34:12.705679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.167 [2024-07-15 18:34:12.705721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.167 [2024-07-15 18:34:12.705760] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical nvmf_bdev_ctrlr_read_cmd *ERROR* messages from 2024-07-15 18:34:12.705802 through 18:34:12.720292 omitted]
00:08:56.170
[2024-07-15 18:34:12.720346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.720390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.720439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.720489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.720535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.720582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.720629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.720687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.720732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.720780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.720827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.720877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.720926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.720970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.721025] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.721076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.721119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.721159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.721193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.721238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.721276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.721318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.721357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.721402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.721452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.721498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.721542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.721580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.721624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:56.170 [2024-07-15 18:34:12.721663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.721699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.721738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.721779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.721825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.721868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.721906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.721946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.721985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.722025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.722072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.722121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.722163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.722204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.722237] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.722278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.722313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.722359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.722398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.722441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.722481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.722524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.722561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.722606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.722642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.722680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.722721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.722759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.722800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.722841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.722880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.722919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.723735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.723785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.723833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.723872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.723920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.723965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.724020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.724067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.724111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.724157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.724202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 
18:34:12.724247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.724290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.724330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.724373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.724418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.724464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.724505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.724549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.724590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.724629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.724668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.724705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.170 [2024-07-15 18:34:12.724743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.724780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.724820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.724860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.724898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.724945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.724988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.725028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.725065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.725105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.725142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.725184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.725230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.725269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.725311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.725350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.725390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 
[2024-07-15 18:34:12.725430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.725474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.725515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.725554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.725596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.725635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.725681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.725725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.725771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.725824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.725869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.725912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.725964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.726012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.726055] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.726104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.726155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.726200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.726247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.726298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.726345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.726393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.726439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.726486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.726675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.726726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.726770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.726826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.726870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:08:56.171 [2024-07-15 18:34:12.726917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.726966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.727011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.727061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.727106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.727161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.727207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.727256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.727305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.727352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.727400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.727443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.727937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.727972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.728011] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.728051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.728094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.728135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.728178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.728216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.728267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.728307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.728347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.728392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.728434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.728472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.728507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.728544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.728583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.728624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.728666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.728705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.728750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.728796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.728848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.728899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.728942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.728980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.729014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.729053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.729092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.729129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.729167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 
18:34:12.729222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.729271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.729314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.729355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.729396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.729436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.729475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.729517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.729567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.729614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.729650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.729691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.729727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.729763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.171 [2024-07-15 18:34:12.729807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.172 [2024-07-15 18:34:12.729857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:56.172 [identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd read errors repeated, 18:34:12.729905 through 18:34:12.734338, elided] true 00:08:56.172 18:34:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:08:56.172 18:34:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.117 18:34:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.376 18:34:13 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:57.376 18:34:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:57.635 true 00:08:57.635 18:34:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:08:57.635 18:34:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.635 18:34:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.894 18:34:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:57.894 18:34:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:58.152 true 00:08:58.152 18:34:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:08:58.152 18:34:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.530 18:34:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.530 18:34:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:59.530 18:34:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:59.530 true 00:08:59.530 18:34:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:08:59.530 18:34:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.790 18:34:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.049 18:34:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:00.049 18:34:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:00.049 true 00:09:00.049 18:34:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:09:00.049 18:34:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.310 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.310 18:34:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.310 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.310 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:09:00.592 18:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:00.592 18:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:00.852 true 00:09:00.852 18:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:09:00.852 18:34:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.789 18:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.789 18:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:01.789 18:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:02.049 true 00:09:02.049 18:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:09:02.049 18:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.049 18:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:02.308 18:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:02.308 18:34:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:02.567 
true 00:09:02.567 18:34:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:09:02.567 18:34:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.947 18:34:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.947 18:34:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:03.947 18:34:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:03.947 true 00:09:03.947 18:34:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:09:03.947 18:34:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.206 18:34:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.465 18:34:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:04.465 18:34:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:04.465 true 00:09:04.724 18:34:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:09:04.724 18:34:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.724 18:34:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.984 18:34:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:04.984 18:34:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:05.243 true 00:09:05.243 18:34:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:09:05.243 18:34:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.243 18:34:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:05.502 18:34:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:05.502 18:34:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:05.761 true 00:09:05.761 18:34:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:09:05.761 18:34:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:09:07.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.146 18:34:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:07.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.146 18:34:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:07.146 18:34:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:07.146 true 00:09:07.146 18:34:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:09:07.146 18:34:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.084 18:34:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.344 18:34:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:08.344 18:34:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:08.344 true 00:09:08.344 18:34:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:09:08.344 
18:34:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.603 18:34:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.861 18:34:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:08.861 18:34:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:09.121 true 00:09:09.121 18:34:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:09:09.121 18:34:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.057 18:34:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.314 18:34:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:10.314 18:34:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:10.572 true 00:09:10.572 18:34:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:09:10.572 18:34:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.524 18:34:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:11.524 18:34:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:11.524 18:34:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:11.783 true 00:09:11.783 18:34:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:09:11.783 18:34:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.783 18:34:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:12.042 18:34:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:12.043 18:34:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:12.301 true 00:09:12.301 18:34:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:09:12.301 18:34:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.678 18:34:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:13.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.678 18:34:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:13.678 18:34:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:13.678 true 00:09:13.937 18:34:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:09:13.937 18:34:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.875 18:34:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.875 18:34:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:14.875 18:34:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:14.875 true 
00:09:15.134 18:34:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:09:15.134 18:34:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.134 18:34:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:15.393 18:34:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:15.393 18:34:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:15.651 true 00:09:15.651 18:34:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:09:15.651 18:34:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.651 18:34:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:15.908 18:34:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:15.909 18:34:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:16.167 true 00:09:16.167 18:34:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:09:16.167 18:34:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:09:16.426 18:34:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:16.426 18:34:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:16.426 18:34:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:16.685 true 00:09:16.685 18:34:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:09:16.685 18:34:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.060 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:18.060 18:34:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:18.061 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:18.061 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:18.061 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:18.061 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:18.061 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:18.061 18:34:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:09:18.061 18:34:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:18.319 true 00:09:18.319 18:34:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 980627 00:09:18.319 18:34:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.289 18:34:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:19.289 18:34:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:09:19.289 18:34:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:09:19.289 true 00:09:19.289 18:34:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:09:19.289 18:34:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.547 18:34:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:19.806 18:34:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:09:19.806 18:34:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:09:20.064 true 00:09:20.064 18:34:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:09:20.064 18:34:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.999 Initializing NVMe Controllers 00:09:20.999 Attached to NVMe over 
Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:20.999 Controller IO queue size 128, less than required. 00:09:20.999 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:20.999 Controller IO queue size 128, less than required. 00:09:20.999 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:20.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:20.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:20.999 Initialization complete. Launching workers. 00:09:20.999 ======================================================== 00:09:20.999 Latency(us) 00:09:20.999 Device Information : IOPS MiB/s Average min max 00:09:20.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2146.92 1.05 35745.56 2116.48 1023514.79 00:09:20.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 15125.23 7.39 8441.60 1612.18 379745.45 00:09:20.999 ======================================================== 00:09:20.999 Total : 17272.15 8.43 11835.46 1612.18 1023514.79 00:09:20.999 00:09:21.257 18:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:21.257 18:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:09:21.257 18:34:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:09:21.515 true 00:09:21.515 18:34:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 980627 00:09:21.515 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: 
(980627) - No such process 00:09:21.515 18:34:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 980627 00:09:21.515 18:34:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.774 18:34:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:21.774 18:34:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:09:21.774 18:34:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:09:21.774 18:34:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:09:21.774 18:34:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:21.774 18:34:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:09:22.032 null0 00:09:22.032 18:34:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:22.032 18:34:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:22.032 18:34:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:09:22.290 null1 00:09:22.290 18:34:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:22.290 18:34:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:22.290 18:34:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create null2 100 4096 00:09:22.290 null2 00:09:22.290 18:34:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:22.290 18:34:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:22.290 18:34:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:22.548 null3 00:09:22.548 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:22.548 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:22.548 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:22.807 null4 00:09:22.807 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:22.807 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:22.807 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:23.065 null5 00:09:23.065 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:23.065 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:23.065 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:23.065 null6 00:09:23.065 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:23.065 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:23.065 18:34:39 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:23.324 null7 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 986236 986237 986239 986241 986243 986245 986247 986249 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.324 18:34:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 
null7 00:09:23.583 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:23.583 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:23.583 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:23.583 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.583 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:23.583 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:23.583 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:23.583 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:23.841 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.841 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.841 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:23.841 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.842 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.842 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:23.842 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.842 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.842 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:23.842 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.842 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.842 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:23.842 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.842 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.842 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.842 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:23.842 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.842 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:23.842 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.842 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.842 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:23.842 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:23.842 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:23.842 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:23.842 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:23.842 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:23.842 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.842 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:23.842 18:34:40 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:23.842 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:23.842 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:23.842 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:24.101 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.101 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.101 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:24.101 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.101 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.101 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:24.101 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.101 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.101 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:24.101 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.101 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.101 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:24.101 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.101 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.101 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:24.101 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.101 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.101 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:24.101 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.101 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.101 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:24.101 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.101 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.101 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:24.360 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:24.360 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:24.360 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:24.360 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:24.360 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.360 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:24.360 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:24.360 18:34:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:24.360 18:34:41 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.360 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.360 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:24.618 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.618 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.618 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:24.618 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.618 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.618 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:24.618 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.618 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.618 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:24.618 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.618 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.618 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:09:24.618 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:24.618 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.618 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:24.618 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.618 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.618 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.618 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.618 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:24.618 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:24.618 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:24.618 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.618 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:24.618 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:24.618 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:24.618 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:24.618 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:24.618 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:24.877 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.877 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.877 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:24.877 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.877 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.877 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:24.877 18:34:41 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.877 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.877 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:24.877 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.877 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.877 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:24.877 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.877 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.877 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:24.877 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.877 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.877 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:24.877 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.877 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.877 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:24.877 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:24.877 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:24.877 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:25.136 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:25.136 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:25.136 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:25.136 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:25.136 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:25.136 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:25.136 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:25.136 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:25.136 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.136 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.136 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:25.136 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.136 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.136 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:25.136 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.136 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.136 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:25.136 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.136 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.136 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:25.136 18:34:41 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.136 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.136 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:25.136 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.136 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.136 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:25.394 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.394 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.394 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:25.394 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.394 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.394 18:34:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:25.394 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:25.394 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:25.394 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:25.394 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:25.394 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:25.394 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:25.394 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:25.394 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:25.653 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.653 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.653 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:25.653 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.653 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.653 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:25.653 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.653 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.653 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:25.653 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.653 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.653 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:25.653 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.653 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.653 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:25.653 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.653 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.653 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.653 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:25.653 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.653 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:25.653 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.653 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.653 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:25.911 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:25.911 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:25.911 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:25.911 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:25.911 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:25.911 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:25.911 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:25.911 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:25.911 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.911 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.911 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:25.911 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.911 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.911 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:25.911 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.911 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.911 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:25.911 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.911 18:34:42 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.911 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:25.911 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.911 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.911 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:25.911 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.911 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:25.911 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.911 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:25.911 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:25.911 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:26.168 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.169 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.169 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:09:26.169 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:26.169 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.169 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:26.169 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:26.169 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:26.169 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:26.169 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:26.169 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:26.426 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.426 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.426 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:26.426 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.426 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.426 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:26.426 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.426 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.426 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:26.426 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.426 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.426 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.426 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.426 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:26.426 18:34:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:26.426 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.426 18:34:43 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.426 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:26.426 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.426 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.426 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:26.426 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.426 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.426 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:26.697 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:26.697 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:26.697 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:26.697 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:09:26.697 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.697 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:26.697 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:26.697 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:26.697 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.697 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.697 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:26.697 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.697 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.697 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:26.697 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.697 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.697 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:26.697 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.697 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.697 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:26.697 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.697 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.697 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:26.697 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.697 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.697 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:26.956 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.956 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.956 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:26.956 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:26.956 18:34:43 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.956 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:26.956 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:26.956 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:26.956 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:26.956 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:26.956 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:26.956 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:26.956 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.956 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 
00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 
00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:27.214 rmmod nvme_tcp 00:09:27.214 rmmod nvme_fabrics 00:09:27.214 rmmod nvme_keyring 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 980239 ']' 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 980239 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 980239 ']' 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 980239 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 980239 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 980239' 00:09:27.214 killing process 
with pid 980239 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 980239 00:09:27.214 18:34:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 980239 00:09:27.488 18:34:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:27.488 18:34:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:27.488 18:34:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:27.488 18:34:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:27.488 18:34:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:27.488 18:34:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.488 18:34:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:27.488 18:34:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.023 18:34:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:30.023 00:09:30.023 real 0m46.809s 00:09:30.023 user 3m12.964s 00:09:30.023 sys 0m14.997s 00:09:30.023 18:34:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:30.023 18:34:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:30.023 ************************************ 00:09:30.023 END TEST nvmf_ns_hotplug_stress 00:09:30.023 ************************************ 00:09:30.023 18:34:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:30.023 18:34:46 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:30.023 18:34:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 
']' 00:09:30.023 18:34:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.023 18:34:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:30.023 ************************************ 00:09:30.023 START TEST nvmf_connect_stress 00:09:30.023 ************************************ 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:30.023 * Looking for test storage... 00:09:30.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:30.023 18:34:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:30.023 18:34:46 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:35.300 18:34:50 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:35.300 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:35.300 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:35.300 Found net devices under 0000:86:00.0: cvl_0_0 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:35.300 Found net devices under 0000:86:00.1: cvl_0_1 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:35.300 18:34:50 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:35.300 18:34:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:35.300 18:34:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:35.300 18:34:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:35.300 18:34:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:35.301 18:34:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:35.301 18:34:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:35.301 18:34:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:35.301 18:34:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:35.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:35.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:09:35.301 00:09:35.301 --- 10.0.0.2 ping statistics --- 00:09:35.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.301 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:09:35.301 18:34:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:35.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:35.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:09:35.301 00:09:35.301 --- 10.0.0.1 ping statistics --- 00:09:35.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.301 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:09:35.301 18:34:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:35.301 18:34:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:09:35.301 18:34:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:35.301 18:34:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:35.301 18:34:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:35.301 18:34:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:35.301 18:34:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:35.301 18:34:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:35.301 18:34:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:35.301 18:34:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:09:35.301 18:34:51 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:35.301 18:34:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:35.301 18:34:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.301 18:34:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=990389 00:09:35.301 18:34:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 990389 00:09:35.301 18:34:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 990389 ']' 00:09:35.301 18:34:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.301 18:34:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:35.301 18:34:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.301 18:34:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:35.301 18:34:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:35.301 18:34:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.301 [2024-07-15 18:34:51.287551] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:09:35.301 [2024-07-15 18:34:51.287594] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.301 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.301 [2024-07-15 18:34:51.343401] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:35.301 [2024-07-15 18:34:51.421997] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.301 [2024-07-15 18:34:51.422032] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:35.301 [2024-07-15 18:34:51.422039] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.301 [2024-07-15 18:34:51.422044] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:35.301 [2024-07-15 18:34:51.422049] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:35.301 [2024-07-15 18:34:51.422160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.301 [2024-07-15 18:34:51.422182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:35.301 [2024-07-15 18:34:51.422183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.560 18:34:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:35.560 18:34:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:09:35.560 18:34:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:35.560 18:34:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:35.560 18:34:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.560 18:34:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.560 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:35.560 18:34:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.560 18:34:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.560 [2024-07-15 18:34:52.126357] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:35.560 18:34:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.560 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:35.560 18:34:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.561 [2024-07-15 18:34:52.159322] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.561 NULL1 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=990633 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.561 
18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.561 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 990633 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.561 18:34:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:36.167 18:34:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.167 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 990633 00:09:36.167 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:36.167 18:34:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.167 18:34:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:36.426 18:34:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.426 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 990633 00:09:36.426 18:34:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:36.426 18:34:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.426 18:34:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:36.693 18:34:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.693 18:34:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 990633 00:09:36.693 18:34:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:36.693 18:34:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.693 18:34:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:36.952 18:34:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.952 18:34:53 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 990633 00:09:36.952 18:34:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:36.952 18:34:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.952 18:34:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:37.210 18:34:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.210 18:34:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 990633 00:09:37.210 18:34:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:37.210 18:34:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.210 18:34:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:37.781 18:34:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.781 18:34:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 990633 00:09:37.781 18:34:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:37.781 18:34:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.781 18:34:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:38.041 18:34:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.041 18:34:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 990633 00:09:38.041 18:34:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:38.041 18:34:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.041 18:34:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:38.300 18:34:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.300 18:34:54 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 990633 00:09:38.300 18:34:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:38.300 18:34:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.300 18:34:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:38.560 18:34:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.560 18:34:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 990633 00:09:38.560 18:34:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:38.560 18:34:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.560 18:34:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:38.818 18:34:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.818 18:34:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 990633 00:09:38.818 18:34:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:38.818 18:34:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.818 18:34:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:39.386 18:34:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.386 18:34:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 990633 00:09:39.386 18:34:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:39.386 18:34:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.386 18:34:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:39.645 18:34:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.645 18:34:56 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 990633 00:09:39.645 18:34:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:39.645 18:34:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.645 18:34:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:39.904 18:34:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.904 18:34:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 990633 00:09:39.904 18:34:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:39.904 18:34:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.904 18:34:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:40.162 18:34:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.162 18:34:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 990633 00:09:40.162 18:34:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:40.162 18:34:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.162 18:34:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:40.730 18:34:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.730 18:34:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 990633 00:09:40.730 18:34:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:40.730 18:34:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.730 18:34:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:40.989 18:34:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.989 18:34:57 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 990633 00:09:40.989 18:34:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:40.989 18:34:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.989 18:34:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:41.248 18:34:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.248 18:34:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 990633 00:09:41.248 18:34:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:41.248 18:34:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.248 18:34:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:41.507 18:34:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.507 18:34:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 990633 00:09:41.507 18:34:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:41.507 18:34:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.507 18:34:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:41.766 18:34:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.766 18:34:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 990633 00:09:41.766 18:34:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:41.766 18:34:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.766 18:34:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:42.334 18:34:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.334 18:34:58 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 990633 00:09:42.334 18:34:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:42.334 18:34:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.334 18:34:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:42.599 18:34:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.599 18:34:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 990633 00:09:42.599 18:34:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:42.599 18:34:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.599 18:34:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:42.857 18:34:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.857 18:34:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 990633 00:09:42.857 18:34:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:42.857 18:34:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.857 18:34:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:43.117 18:34:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.117 18:34:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 990633 00:09:43.117 18:34:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:43.117 18:34:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.117 18:34:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:43.376 18:35:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.376 18:35:00 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 990633 00:09:43.376 18:35:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:43.376 18:35:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.376 18:35:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:43.943 18:35:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.943 18:35:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 990633 00:09:43.943 18:35:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:43.943 18:35:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.943 18:35:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:44.202 18:35:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.202 18:35:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 990633 00:09:44.202 18:35:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:44.202 18:35:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.202 18:35:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:44.461 18:35:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.461 18:35:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 990633 00:09:44.461 18:35:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:44.461 18:35:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.461 18:35:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:44.720 18:35:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.720 18:35:01 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 990633 00:09:44.720 18:35:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:44.720 18:35:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.720 18:35:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:45.287 18:35:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.287 18:35:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 990633 00:09:45.287 18:35:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:45.287 18:35:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.287 18:35:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:45.546 18:35:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.546 18:35:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 990633 00:09:45.546 18:35:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:45.546 18:35:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.546 18:35:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:45.806 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:45.806 18:35:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.806 18:35:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 990633 00:09:45.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (990633) - No such process 00:09:45.806 18:35:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 990633 00:09:45.806 18:35:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:45.806 18:35:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:45.806 18:35:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:09:45.806 18:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:45.806 18:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:09:45.806 18:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:45.806 18:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:09:45.806 18:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:45.806 18:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:45.806 rmmod nvme_tcp 00:09:45.806 rmmod nvme_fabrics 00:09:45.806 rmmod nvme_keyring 00:09:45.806 18:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:45.806 18:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:09:45.806 18:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:09:45.806 18:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 990389 ']' 00:09:45.806 18:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 990389 00:09:45.806 18:35:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 990389 ']' 00:09:45.806 18:35:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 990389 00:09:45.806 18:35:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:09:45.806 18:35:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:45.806 18:35:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 990389 00:09:45.806 18:35:02 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:45.806 18:35:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:45.806 18:35:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 990389' 00:09:45.806 killing process with pid 990389 00:09:45.806 18:35:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 990389 00:09:45.806 18:35:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 990389 00:09:46.065 18:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:46.065 18:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:46.065 18:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:46.065 18:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:46.065 18:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:46.065 18:35:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.065 18:35:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:46.065 18:35:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.994 18:35:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:47.994 00:09:47.994 real 0m18.490s 00:09:47.994 user 0m40.784s 00:09:47.994 sys 0m7.751s 00:09:47.994 18:35:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:47.994 18:35:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:47.994 ************************************ 00:09:47.994 END TEST nvmf_connect_stress 00:09:47.994 ************************************ 00:09:48.254 18:35:04 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:09:48.254 18:35:04 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:48.254 18:35:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:48.254 18:35:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:48.254 18:35:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:48.254 ************************************ 00:09:48.254 START TEST nvmf_fused_ordering 00:09:48.254 ************************************ 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:48.254 * Looking for test storage... 00:09:48.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # 
remove_spdk_ns 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:09:48.254 18:35:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 
00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:53.523 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:53.523 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:53.523 18:35:09 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:53.523 Found net devices under 0000:86:00.0: cvl_0_0 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:53.523 Found net devices under 0000:86:00.1: cvl_0_1 00:09:53.523 18:35:09 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:53.523 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:53.524 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:53.524 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:53.524 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.524 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:53.524 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:53.524 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:53.524 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:53.524 18:35:09 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:53.524 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:53.524 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:53.524 18:35:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:53.524 18:35:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:53.524 18:35:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:53.524 18:35:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:53.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:53.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:09:53.524 00:09:53.524 --- 10.0.0.2 ping statistics --- 00:09:53.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.524 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:09:53.524 18:35:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:53.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:53.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:09:53.524 00:09:53.524 --- 10.0.0.1 ping statistics --- 00:09:53.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.524 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:09:53.524 18:35:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:53.524 18:35:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:09:53.524 18:35:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:53.524 18:35:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:53.524 18:35:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:53.524 18:35:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:53.524 18:35:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:53.524 18:35:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:53.524 18:35:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:53.524 18:35:10 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:09:53.524 18:35:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:53.524 18:35:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:53.524 18:35:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:53.524 18:35:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=995775 00:09:53.524 18:35:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 995775 00:09:53.524 18:35:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 
00:09:53.524 18:35:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 995775 ']' 00:09:53.524 18:35:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.524 18:35:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:53.524 18:35:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.524 18:35:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:53.524 18:35:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:53.524 [2024-07-15 18:35:10.186212] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:09:53.524 [2024-07-15 18:35:10.186258] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.524 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.783 [2024-07-15 18:35:10.242553] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.783 [2024-07-15 18:35:10.321267] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.783 [2024-07-15 18:35:10.321300] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.783 [2024-07-15 18:35:10.321307] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.783 [2024-07-15 18:35:10.321313] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:09:53.783 [2024-07-15 18:35:10.321318] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:53.783 [2024-07-15 18:35:10.321335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.416 18:35:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:54.416 18:35:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:09:54.416 18:35:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:54.416 18:35:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:54.416 18:35:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:54.416 18:35:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.416 18:35:11 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:54.416 18:35:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.416 18:35:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:54.416 [2024-07-15 18:35:11.027515] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:54.416 18:35:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.416 18:35:11 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:54.416 18:35:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.416 18:35:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:54.416 18:35:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.417 18:35:11 nvmf_tcp.nvmf_fused_ordering -- 
target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:54.417 18:35:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.417 18:35:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:54.417 [2024-07-15 18:35:11.043642] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:54.417 18:35:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.417 18:35:11 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:54.417 18:35:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.417 18:35:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:54.417 NULL1 00:09:54.417 18:35:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.417 18:35:11 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:09:54.417 18:35:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.417 18:35:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:54.417 18:35:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.417 18:35:11 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:54.417 18:35:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.417 18:35:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:54.417 18:35:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.417 18:35:11 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:54.417 [2024-07-15 18:35:11.096962] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:09:54.417 [2024-07-15 18:35:11.096991] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid995882 ] 00:09:54.677 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.936 Attached to nqn.2016-06.io.spdk:cnode1 00:09:54.936 Namespace ID: 1 size: 1GB 00:09:54.936 fused_ordering(0) 00:09:54.936 fused_ordering(1) 00:09:54.936 fused_ordering(2) 00:09:54.936 fused_ordering(3) 00:09:54.936 fused_ordering(4) 00:09:54.936 fused_ordering(5) 00:09:54.936 fused_ordering(6) 00:09:54.936 fused_ordering(7) 00:09:54.936 fused_ordering(8) 00:09:54.936 fused_ordering(9) 00:09:54.936 fused_ordering(10) 00:09:54.936 fused_ordering(11) 00:09:54.936 fused_ordering(12) 00:09:54.936 fused_ordering(13) 00:09:54.936 fused_ordering(14) 00:09:54.936 fused_ordering(15) 00:09:54.936 fused_ordering(16) 00:09:54.936 fused_ordering(17) 00:09:54.936 fused_ordering(18) 00:09:54.936 fused_ordering(19) 00:09:54.936 fused_ordering(20) 00:09:54.936 fused_ordering(21) 00:09:54.936 fused_ordering(22) 00:09:54.936 fused_ordering(23) 00:09:54.936 fused_ordering(24) 00:09:54.936 fused_ordering(25) 00:09:54.936 fused_ordering(26) 00:09:54.936 fused_ordering(27) 00:09:54.936 fused_ordering(28) 00:09:54.936 fused_ordering(29) 00:09:54.936 fused_ordering(30) 00:09:54.936 fused_ordering(31) 00:09:54.936 fused_ordering(32) 00:09:54.936 fused_ordering(33) 00:09:54.937 fused_ordering(34) 00:09:54.937 fused_ordering(35) 00:09:54.937 fused_ordering(36) 00:09:54.937 fused_ordering(37) 00:09:54.937 fused_ordering(38) 
00:09:54.937 fused_ordering(39) [... fused_ordering counters 39 through 765 elided ...] 00:09:56.027 fused_ordering(766)
00:09:56.027 fused_ordering(767) 00:09:56.027 fused_ordering(768) 00:09:56.027 fused_ordering(769) 00:09:56.027 fused_ordering(770) 00:09:56.027 fused_ordering(771) 00:09:56.027 fused_ordering(772) 00:09:56.027 fused_ordering(773) 00:09:56.027 fused_ordering(774) 00:09:56.027 fused_ordering(775) 00:09:56.027 fused_ordering(776) 00:09:56.027 fused_ordering(777) 00:09:56.027 fused_ordering(778) 00:09:56.027 fused_ordering(779) 00:09:56.027 fused_ordering(780) 00:09:56.027 fused_ordering(781) 00:09:56.027 fused_ordering(782) 00:09:56.027 fused_ordering(783) 00:09:56.027 fused_ordering(784) 00:09:56.027 fused_ordering(785) 00:09:56.027 fused_ordering(786) 00:09:56.027 fused_ordering(787) 00:09:56.027 fused_ordering(788) 00:09:56.027 fused_ordering(789) 00:09:56.027 fused_ordering(790) 00:09:56.027 fused_ordering(791) 00:09:56.027 fused_ordering(792) 00:09:56.027 fused_ordering(793) 00:09:56.027 fused_ordering(794) 00:09:56.027 fused_ordering(795) 00:09:56.027 fused_ordering(796) 00:09:56.027 fused_ordering(797) 00:09:56.027 fused_ordering(798) 00:09:56.027 fused_ordering(799) 00:09:56.027 fused_ordering(800) 00:09:56.027 fused_ordering(801) 00:09:56.027 fused_ordering(802) 00:09:56.027 fused_ordering(803) 00:09:56.027 fused_ordering(804) 00:09:56.027 fused_ordering(805) 00:09:56.027 fused_ordering(806) 00:09:56.027 fused_ordering(807) 00:09:56.027 fused_ordering(808) 00:09:56.027 fused_ordering(809) 00:09:56.027 fused_ordering(810) 00:09:56.027 fused_ordering(811) 00:09:56.027 fused_ordering(812) 00:09:56.027 fused_ordering(813) 00:09:56.027 fused_ordering(814) 00:09:56.028 fused_ordering(815) 00:09:56.028 fused_ordering(816) 00:09:56.028 fused_ordering(817) 00:09:56.028 fused_ordering(818) 00:09:56.028 fused_ordering(819) 00:09:56.028 fused_ordering(820) 00:09:56.596 fused_ordering(821) 00:09:56.596 fused_ordering(822) 00:09:56.596 fused_ordering(823) 00:09:56.596 fused_ordering(824) 00:09:56.596 fused_ordering(825) 00:09:56.596 fused_ordering(826) 00:09:56.596 
fused_ordering(827) 00:09:56.596 fused_ordering(828) 00:09:56.596 fused_ordering(829) 00:09:56.596 fused_ordering(830) 00:09:56.596 fused_ordering(831) 00:09:56.596 fused_ordering(832) 00:09:56.596 fused_ordering(833) 00:09:56.596 fused_ordering(834) 00:09:56.596 fused_ordering(835) 00:09:56.596 fused_ordering(836) 00:09:56.596 fused_ordering(837) 00:09:56.596 fused_ordering(838) 00:09:56.596 fused_ordering(839) 00:09:56.596 fused_ordering(840) 00:09:56.596 fused_ordering(841) 00:09:56.596 fused_ordering(842) 00:09:56.596 fused_ordering(843) 00:09:56.596 fused_ordering(844) 00:09:56.596 fused_ordering(845) 00:09:56.596 fused_ordering(846) 00:09:56.596 fused_ordering(847) 00:09:56.596 fused_ordering(848) 00:09:56.596 fused_ordering(849) 00:09:56.596 fused_ordering(850) 00:09:56.596 fused_ordering(851) 00:09:56.596 fused_ordering(852) 00:09:56.596 fused_ordering(853) 00:09:56.596 fused_ordering(854) 00:09:56.596 fused_ordering(855) 00:09:56.596 fused_ordering(856) 00:09:56.596 fused_ordering(857) 00:09:56.596 fused_ordering(858) 00:09:56.596 fused_ordering(859) 00:09:56.596 fused_ordering(860) 00:09:56.596 fused_ordering(861) 00:09:56.596 fused_ordering(862) 00:09:56.596 fused_ordering(863) 00:09:56.596 fused_ordering(864) 00:09:56.596 fused_ordering(865) 00:09:56.596 fused_ordering(866) 00:09:56.596 fused_ordering(867) 00:09:56.596 fused_ordering(868) 00:09:56.596 fused_ordering(869) 00:09:56.596 fused_ordering(870) 00:09:56.596 fused_ordering(871) 00:09:56.596 fused_ordering(872) 00:09:56.596 fused_ordering(873) 00:09:56.596 fused_ordering(874) 00:09:56.596 fused_ordering(875) 00:09:56.596 fused_ordering(876) 00:09:56.596 fused_ordering(877) 00:09:56.596 fused_ordering(878) 00:09:56.596 fused_ordering(879) 00:09:56.596 fused_ordering(880) 00:09:56.596 fused_ordering(881) 00:09:56.596 fused_ordering(882) 00:09:56.596 fused_ordering(883) 00:09:56.596 fused_ordering(884) 00:09:56.596 fused_ordering(885) 00:09:56.596 fused_ordering(886) 00:09:56.596 fused_ordering(887) 
00:09:56.596 fused_ordering(888) 00:09:56.596 fused_ordering(889) 00:09:56.596 fused_ordering(890) 00:09:56.596 fused_ordering(891) 00:09:56.596 fused_ordering(892) 00:09:56.596 fused_ordering(893) 00:09:56.596 fused_ordering(894) 00:09:56.596 fused_ordering(895) 00:09:56.596 fused_ordering(896) 00:09:56.596 fused_ordering(897) 00:09:56.596 fused_ordering(898) 00:09:56.596 fused_ordering(899) 00:09:56.596 fused_ordering(900) 00:09:56.596 fused_ordering(901) 00:09:56.596 fused_ordering(902) 00:09:56.596 fused_ordering(903) 00:09:56.596 fused_ordering(904) 00:09:56.596 fused_ordering(905) 00:09:56.596 fused_ordering(906) 00:09:56.596 fused_ordering(907) 00:09:56.596 fused_ordering(908) 00:09:56.596 fused_ordering(909) 00:09:56.596 fused_ordering(910) 00:09:56.596 fused_ordering(911) 00:09:56.596 fused_ordering(912) 00:09:56.596 fused_ordering(913) 00:09:56.596 fused_ordering(914) 00:09:56.596 fused_ordering(915) 00:09:56.596 fused_ordering(916) 00:09:56.596 fused_ordering(917) 00:09:56.596 fused_ordering(918) 00:09:56.596 fused_ordering(919) 00:09:56.596 fused_ordering(920) 00:09:56.596 fused_ordering(921) 00:09:56.596 fused_ordering(922) 00:09:56.596 fused_ordering(923) 00:09:56.596 fused_ordering(924) 00:09:56.596 fused_ordering(925) 00:09:56.596 fused_ordering(926) 00:09:56.596 fused_ordering(927) 00:09:56.596 fused_ordering(928) 00:09:56.596 fused_ordering(929) 00:09:56.596 fused_ordering(930) 00:09:56.596 fused_ordering(931) 00:09:56.596 fused_ordering(932) 00:09:56.596 fused_ordering(933) 00:09:56.597 fused_ordering(934) 00:09:56.597 fused_ordering(935) 00:09:56.597 fused_ordering(936) 00:09:56.597 fused_ordering(937) 00:09:56.597 fused_ordering(938) 00:09:56.597 fused_ordering(939) 00:09:56.597 fused_ordering(940) 00:09:56.597 fused_ordering(941) 00:09:56.597 fused_ordering(942) 00:09:56.597 fused_ordering(943) 00:09:56.597 fused_ordering(944) 00:09:56.597 fused_ordering(945) 00:09:56.597 fused_ordering(946) 00:09:56.597 fused_ordering(947) 00:09:56.597 
fused_ordering(948) 00:09:56.597 fused_ordering(949) 00:09:56.597 fused_ordering(950) 00:09:56.597 fused_ordering(951) 00:09:56.597 fused_ordering(952) 00:09:56.597 fused_ordering(953) 00:09:56.597 fused_ordering(954) 00:09:56.597 fused_ordering(955) 00:09:56.597 fused_ordering(956) 00:09:56.597 fused_ordering(957) 00:09:56.597 fused_ordering(958) 00:09:56.597 fused_ordering(959) 00:09:56.597 fused_ordering(960) 00:09:56.597 fused_ordering(961) 00:09:56.597 fused_ordering(962) 00:09:56.597 fused_ordering(963) 00:09:56.597 fused_ordering(964) 00:09:56.597 fused_ordering(965) 00:09:56.597 fused_ordering(966) 00:09:56.597 fused_ordering(967) 00:09:56.597 fused_ordering(968) 00:09:56.597 fused_ordering(969) 00:09:56.597 fused_ordering(970) 00:09:56.597 fused_ordering(971) 00:09:56.597 fused_ordering(972) 00:09:56.597 fused_ordering(973) 00:09:56.597 fused_ordering(974) 00:09:56.597 fused_ordering(975) 00:09:56.597 fused_ordering(976) 00:09:56.597 fused_ordering(977) 00:09:56.597 fused_ordering(978) 00:09:56.597 fused_ordering(979) 00:09:56.597 fused_ordering(980) 00:09:56.597 fused_ordering(981) 00:09:56.597 fused_ordering(982) 00:09:56.597 fused_ordering(983) 00:09:56.597 fused_ordering(984) 00:09:56.597 fused_ordering(985) 00:09:56.597 fused_ordering(986) 00:09:56.597 fused_ordering(987) 00:09:56.597 fused_ordering(988) 00:09:56.597 fused_ordering(989) 00:09:56.597 fused_ordering(990) 00:09:56.597 fused_ordering(991) 00:09:56.597 fused_ordering(992) 00:09:56.597 fused_ordering(993) 00:09:56.597 fused_ordering(994) 00:09:56.597 fused_ordering(995) 00:09:56.597 fused_ordering(996) 00:09:56.597 fused_ordering(997) 00:09:56.597 fused_ordering(998) 00:09:56.597 fused_ordering(999) 00:09:56.597 fused_ordering(1000) 00:09:56.597 fused_ordering(1001) 00:09:56.597 fused_ordering(1002) 00:09:56.597 fused_ordering(1003) 00:09:56.597 fused_ordering(1004) 00:09:56.597 fused_ordering(1005) 00:09:56.597 fused_ordering(1006) 00:09:56.597 fused_ordering(1007) 00:09:56.597 
fused_ordering(1008) 00:09:56.597 fused_ordering(1009) 00:09:56.597 fused_ordering(1010) 00:09:56.597 fused_ordering(1011) 00:09:56.597 fused_ordering(1012) 00:09:56.597 fused_ordering(1013) 00:09:56.597 fused_ordering(1014) 00:09:56.597 fused_ordering(1015) 00:09:56.597 fused_ordering(1016) 00:09:56.597 fused_ordering(1017) 00:09:56.597 fused_ordering(1018) 00:09:56.597 fused_ordering(1019) 00:09:56.597 fused_ordering(1020) 00:09:56.597 fused_ordering(1021) 00:09:56.597 fused_ordering(1022) 00:09:56.597 fused_ordering(1023) 00:09:56.597 18:35:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:09:56.597 18:35:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:09:56.597 18:35:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:56.597 18:35:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:09:56.597 18:35:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:56.597 18:35:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:09:56.597 18:35:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:56.597 18:35:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:56.597 rmmod nvme_tcp 00:09:56.597 rmmod nvme_fabrics 00:09:56.597 rmmod nvme_keyring 00:09:56.597 18:35:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:56.597 18:35:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:09:56.597 18:35:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:09:56.597 18:35:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 995775 ']' 00:09:56.597 18:35:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 995775 00:09:56.597 18:35:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 995775 ']' 00:09:56.597 18:35:13 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 995775 00:09:56.597 18:35:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:09:56.597 18:35:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:56.597 18:35:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 995775 00:09:56.597 18:35:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:56.597 18:35:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:56.597 18:35:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 995775' 00:09:56.597 killing process with pid 995775 00:09:56.597 18:35:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 995775 00:09:56.597 18:35:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 995775 00:09:56.856 18:35:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:56.856 18:35:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:56.856 18:35:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:56.856 18:35:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:56.856 18:35:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:56.856 18:35:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.856 18:35:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:56.856 18:35:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.761 18:35:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:58.761 00:09:58.761 real 
0m10.627s 00:09:58.761 user 0m5.585s 00:09:58.761 sys 0m5.424s 00:09:58.761 18:35:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:58.761 18:35:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:58.761 ************************************ 00:09:58.761 END TEST nvmf_fused_ordering 00:09:58.761 ************************************ 00:09:58.761 18:35:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:58.761 18:35:15 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:58.761 18:35:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:58.761 18:35:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:58.761 18:35:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:58.761 ************************************ 00:09:58.761 START TEST nvmf_delete_subsystem 00:09:58.761 ************************************ 00:09:58.761 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:59.019 * Looking for test storage... 
00:09:59.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:59.019 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:59.019 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:59.020 18:35:15 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:59.020 18:35:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:04.293 18:35:20 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:04.293 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:04.293 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:04.293 Found net devices under 0000:86:00.0: cvl_0_0 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:04.293 Found net devices under 0000:86:00.1: cvl_0_1 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:04.293 
18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:04.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:04.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:10:04.293 00:10:04.293 --- 10.0.0.2 ping statistics --- 00:10:04.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.293 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:04.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:04.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:10:04.293 00:10:04.293 --- 10.0.0.1 ping statistics --- 00:10:04.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.293 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:04.293 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:04.294 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:04.294 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:04.294 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:04.294 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:04.294 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:04.294 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:04.294 
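The namespace plumbing traced above (nvmf_tcp_init in nvmf/common.sh) can be read more easily as a plain command sequence. This is a sketch reconstructed from the trace, assuming the `cvl_0_0`/`cvl_0_1` interface names and 10.0.0.0/24 addressing seen in the log; the `run` wrapper only echoes each command so the sequence can be inspected (and tested) without root or real NICs — drop it to execute for real.

```shell
# Echo-only wrapper: print each command instead of executing it.
run() { echo "$*"; }

run ip -4 addr flush cvl_0_0                          # start from clean interfaces
run ip -4 addr flush cvl_0_1
run ip netns add cvl_0_0_ns_spdk                      # namespace for the target side
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
run ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator IP (host side)
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run ip netns exec cvl_0_0_ns_spdk ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
run ping -c 1 10.0.0.2                                # initiator -> target reachability
run ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
```

The two pings at the end are the same reachability checks whose output appears in the log above; only after both succeed does the harness start `nvmf_tgt` inside the namespace.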
18:35:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:04.294 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=999592 00:10:04.294 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 999592 00:10:04.294 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 999592 ']' 00:10:04.294 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.294 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:04.294 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.294 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:04.294 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:04.294 18:35:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:04.294 [2024-07-15 18:35:20.706950] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:10:04.294 [2024-07-15 18:35:20.707001] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.294 EAL: No free 2048 kB hugepages reported on node 1 00:10:04.294 [2024-07-15 18:35:20.762455] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:04.294 [2024-07-15 18:35:20.841805] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:04.294 [2024-07-15 18:35:20.841841] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:04.294 [2024-07-15 18:35:20.841847] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:04.294 [2024-07-15 18:35:20.841853] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:04.294 [2024-07-15 18:35:20.841859] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:04.294 [2024-07-15 18:35:20.841903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.294 [2024-07-15 18:35:20.841906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.862 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:04.862 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:10:04.862 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:04.862 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:04.862 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:04.862 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.862 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:04.862 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.863 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:04.863 [2024-07-15 18:35:21.546072] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:04.863 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:10:04.863 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:04.863 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.863 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:04.863 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.863 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:04.863 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.863 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:04.863 [2024-07-15 18:35:21.562203] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:04.863 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.863 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:04.863 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.863 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:05.122 NULL1 00:10:05.122 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.122 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:05.122 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.122 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:05.122 Delay0 00:10:05.122 18:35:21 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.122 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.122 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.122 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:05.122 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.122 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=999802 00:10:05.122 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:05.122 18:35:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:05.122 EAL: No free 2048 kB hugepages reported on node 1 00:10:05.122 [2024-07-15 18:35:21.636732] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
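The `rpc_cmd` calls traced above (delete_subsystem.sh lines 15-28) build the target that the perf run then hammers. In SPDK's test framework `rpc_cmd` is a thin wrapper around `scripts/rpc.py`, so the same setup can be written out directly; the values below are copied from the trace. The `rpc()` wrapper here echoes instead of executing, since the real commands need a running `nvmf_tgt` listening on `/var/tmp/spdk.sock`.

```shell
# Echo-only stand-in for "scripts/rpc.py" against a live nvmf_tgt.
rpc() { echo "scripts/rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192            # TCP transport, options as traced
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512                    # 1000 MiB null bdev, 512 B blocks
rpc bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000        # ~1 s injected latency on every I/O
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

The delay bdev is the point of the test: with roughly one second of injected latency per I/O, `nvmf_delete_subsystem` is guaranteed to race against a queue full of in-flight requests.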
00:10:07.027 18:35:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:07.027 18:35:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:07.027 18:35:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:10:07.027 [repeated "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines trimmed: in-flight I/O fails while the subsystem is deleted under load]
00:10:07.027 [2024-07-15 18:35:23.715656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c65c0 is same with the state(5) to be set
00:10:07.027 [2024-07-15 18:35:23.717029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0494000c00 is same with the state(5) to be set
00:10:08.402 [2024-07-15 18:35:24.690171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c7ac0 is same with the state(5) to be set
00:10:08.403 [2024-07-15 18:35:24.719845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f049400d2f0 is same with the state(5) to be set
00:10:08.403 [2024-07-15 18:35:24.720251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c6000 is same with the state(5) to be set
00:10:08.403 [2024-07-15 18:35:24.720394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c63e0 is same with the state(5) to be set
[2024-07-15 18:35:24.720524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c67a0 is same with the state(5) to be set
00:10:08.403 Initializing NVMe Controllers
00:10:08.403 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:08.403 Controller IO queue size 128, less than required.
00:10:08.403 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:08.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:08.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:08.403 Initialization complete. Launching workers.
00:10:08.403 ========================================================
00:10:08.403                                                                          Latency(us)
00:10:08.403 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:10:08.403 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     184.36       0.09  953586.25     900.06 1011704.91
00:10:08.403 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     153.05       0.07  885531.22     210.77 1011913.18
00:10:08.403 ========================================================
00:10:08.403 Total                                                                  :     337.41       0.16  922715.93     210.77 1011913.18
00:10:08.403
[2024-07-15 18:35:24.721029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c7ac0 (9): Bad file descriptor
00:10:08.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:10:08.403 18:35:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:08.403 18:35:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:10:08.403 18:35:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 999802
00:10:08.403 18:35:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:10:08.661 18:35:25
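The Total row in the perf summary above is the IOPS-weighted mean of the two per-core rows, not a simple average. A quick awk check, using the figures copied from the log, reproduces the reported 922715.93 us to within rounding of the per-row values:

```shell
# Recompute the "Total" row of the spdk_nvme_perf summary as an
# IOPS-weighted mean of the per-core average latencies (values from the log).
awk 'BEGIN {
  iops2 = 184.36; avg2 = 953586.25   # NSID 1 from core 2
  iops3 = 153.05; avg3 = 885531.22   # NSID 1 from core 3
  total_iops = iops2 + iops3
  total_avg  = (iops2 * avg2 + iops3 * avg3) / total_iops
  printf "%.2f %.2f\n", total_iops, total_avg
}'
```

The ~900 ms averages are expected here: the namespace sits on a delay bdev configured with one second of injected latency, so the latency numbers measure the delay device, not the transport.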
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 999802 00:10:08.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (999802) - No such process 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 999802 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 999802 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 999802 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- 
# set +x 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:08.661 [2024-07-15 18:35:25.247131] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1000486 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1000486 00:10:08.661 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:08.661 EAL: No free 2048 kB hugepages reported on node 1 00:10:08.661 [2024-07-15 18:35:25.308267] 
subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:10:09.228 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:09.228 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1000486 00:10:09.228 18:35:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:09.794 18:35:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:09.794 18:35:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1000486 00:10:09.794 18:35:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:10.364 18:35:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:10.364 18:35:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1000486 00:10:10.364 18:35:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:10.623 18:35:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:10.623 18:35:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1000486 00:10:10.623 18:35:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:11.203 18:35:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:11.203 18:35:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1000486 00:10:11.203 18:35:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:11.772 18:35:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:11.772 18:35:28 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1000486 00:10:11.772 18:35:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:12.031 Initializing NVMe Controllers 00:10:12.031 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:12.031 Controller IO queue size 128, less than required. 00:10:12.031 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:12.031 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:12.031 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:12.031 Initialization complete. Launching workers. 00:10:12.031 ======================================================== 00:10:12.031 Latency(us) 00:10:12.031 Device Information : IOPS MiB/s Average min max 00:10:12.031 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003708.19 1000153.22 1040766.03 00:10:12.031 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005203.43 1000246.02 1042053.92 00:10:12.031 ======================================================== 00:10:12.031 Total : 256.00 0.12 1004455.81 1000153.22 1042053.92 00:10:12.031 00:10:12.291 18:35:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:12.291 18:35:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1000486 00:10:12.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1000486) - No such process 00:10:12.291 18:35:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1000486 00:10:12.291 18:35:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:12.291 18:35:28 nvmf_tcp.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:12.291 18:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:12.291 18:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:10:12.291 18:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:12.291 18:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:10:12.291 18:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:12.291 18:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:12.291 rmmod nvme_tcp 00:10:12.291 rmmod nvme_fabrics 00:10:12.291 rmmod nvme_keyring 00:10:12.291 18:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:12.291 18:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:10:12.291 18:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:10:12.291 18:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 999592 ']' 00:10:12.291 18:35:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 999592 00:10:12.291 18:35:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 999592 ']' 00:10:12.291 18:35:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 999592 00:10:12.291 18:35:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:10:12.291 18:35:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:12.291 18:35:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 999592 00:10:12.291 18:35:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:12.291 18:35:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:12.291 18:35:28 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 999592' 00:10:12.291 killing process with pid 999592 00:10:12.291 18:35:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 999592 00:10:12.291 18:35:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 999592 00:10:12.551 18:35:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:12.551 18:35:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:12.551 18:35:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:12.551 18:35:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:12.551 18:35:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:12.551 18:35:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.551 18:35:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:12.551 18:35:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.528 18:35:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:14.528 00:10:14.528 real 0m15.690s 00:10:14.528 user 0m30.042s 00:10:14.528 sys 0m4.714s 00:10:14.528 18:35:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:14.528 18:35:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:14.528 ************************************ 00:10:14.528 END TEST nvmf_delete_subsystem 00:10:14.528 ************************************ 00:10:14.528 18:35:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:14.528 18:35:31 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:10:14.528 
18:35:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:14.528 18:35:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:14.528 18:35:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:14.528 ************************************ 00:10:14.528 START TEST nvmf_ns_masking 00:10:14.528 ************************************ 00:10:14.528 18:35:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:10:14.788 * Looking for test storage... 00:10:14.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=c8e358ba-ec1a-4ecc-a74a-4aad6a221280 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=c1a95c2d-39a9-4951-8e02-3d6082969166 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # 
HOSTID=f3f6831e-01b2-49ce-9f52-cdc43138eab4 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:10:14.788 18:35:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:20.066 
18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:20.066 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:20.066 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.066 18:35:36 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:20.066 Found net devices under 0000:86:00.0: cvl_0_0 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:20.066 18:35:36 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:20.066 Found net devices under 0000:86:00.1: cvl_0_1 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:20.066 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:20.067 
18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:20.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:20.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:10:20.067 00:10:20.067 --- 10.0.0.2 ping statistics --- 00:10:20.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.067 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:20.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:20.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:10:20.067 00:10:20.067 --- 10.0.0.1 ping statistics --- 00:10:20.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.067 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1004483 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1004483 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' 
-z 1004483 ']' 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:20.067 18:35:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:20.067 [2024-07-15 18:35:36.692182] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:10:20.067 [2024-07-15 18:35:36.692223] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.067 EAL: No free 2048 kB hugepages reported on node 1 00:10:20.067 [2024-07-15 18:35:36.749178] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.327 [2024-07-15 18:35:36.826092] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.327 [2024-07-15 18:35:36.826131] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.327 [2024-07-15 18:35:36.826138] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.327 [2024-07-15 18:35:36.826145] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.327 [2024-07-15 18:35:36.826150] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:20.327 [2024-07-15 18:35:36.826168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.894 18:35:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:20.894 18:35:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:10:20.894 18:35:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:20.894 18:35:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:20.894 18:35:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:20.894 18:35:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.894 18:35:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:21.153 [2024-07-15 18:35:37.682284] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.153 18:35:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:10:21.153 18:35:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:10:21.153 18:35:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:21.412 Malloc1 00:10:21.412 18:35:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:21.412 Malloc2 00:10:21.412 18:35:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:21.671 18:35:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:10:21.931 18:35:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:21.931 [2024-07-15 18:35:38.560012] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:21.931 18:35:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:10:21.931 18:35:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f3f6831e-01b2-49ce-9f52-cdc43138eab4 -a 10.0.0.2 -s 4420 -i 4 00:10:22.190 18:35:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:10:22.190 18:35:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:22.190 18:35:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:22.190 18:35:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:22.190 18:35:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:24.096 18:35:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:24.096 18:35:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:24.096 18:35:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:24.096 18:35:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:24.096 18:35:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:24.096 18:35:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:10:24.096 
18:35:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:24.096 18:35:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:24.096 18:35:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:24.096 18:35:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:24.096 18:35:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:10:24.096 18:35:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:24.096 18:35:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:24.096 [ 0]:0x1 00:10:24.096 18:35:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:24.096 18:35:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:24.355 18:35:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0935c0a4bffb444a8f3c120ee787b9a0 00:10:24.355 18:35:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0935c0a4bffb444a8f3c120ee787b9a0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:24.355 18:35:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:10:24.355 18:35:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:10:24.355 18:35:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:24.355 18:35:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:24.355 [ 0]:0x1 00:10:24.355 18:35:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:24.355 18:35:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
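The waitforserial loop traced above (autotest_common.sh lines 1198-1208) can be sketched as a standalone helper. This is a simplified reconstruction from the xtrace output, not the exact library function; in particular the real helper sleeps before the first check rather than after it:

```shell
# Simplified sketch of the waitforserial helper seen in the xtrace:
# poll lsblk until the expected count of devices carrying the given
# serial appears, retrying up to 16 times with a 2-second pause.
waitforserial() {
  local serial=$1 want=${2:-1} i=0 got
  while (( i++ <= 15 )); do
    got=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
    (( got == want )) && return 0
    sleep 2
  done
  return 1
}
```

In the log the expected serial is SPDKISFASTANDAWESOME, and the device shows up on the first check after the initial sleep, so the loop exits immediately with `return 0`.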
00:10:24.355 18:35:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0935c0a4bffb444a8f3c120ee787b9a0 00:10:24.355 18:35:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0935c0a4bffb444a8f3c120ee787b9a0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:24.355 18:35:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:10:24.355 18:35:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:24.355 18:35:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:24.355 [ 1]:0x2 00:10:24.355 18:35:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:24.355 18:35:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:24.615 18:35:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ba546463f9e74a02a8e9612e0be3664e 00:10:24.615 18:35:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ba546463f9e74a02a8e9612e0be3664e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:24.615 18:35:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:10:24.615 18:35:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:24.615 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.615 18:35:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.874 18:35:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:10:24.874 18:35:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:10:24.874 18:35:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f3f6831e-01b2-49ce-9f52-cdc43138eab4 -a 10.0.0.2 -s 4420 -i 4 00:10:25.133 18:35:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:10:25.133 18:35:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:25.133 18:35:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:25.133 18:35:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:10:25.133 18:35:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:10:25.133 18:35:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:27.040 18:35:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:27.040 18:35:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:27.040 18:35:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:27.040 18:35:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:27.040 18:35:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:27.040 18:35:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:10:27.040 18:35:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:27.040 18:35:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:27.300 18:35:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:27.300 18:35:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:27.300 18:35:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 
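The ns_is_visible checks running through this trace boil down to one comparison: `nvme id-ns ... | jq -r .nguid` reports an all-zero NGUID for a namespace the controller is not allowed to see. A minimal reconstruction of that predicate follows; the real helper in ns_masking.sh additionally greps `nvme list-ns` output for the nsid:

```shell
# Sketch of the visibility predicate from ns_masking.sh@44-45 above:
# a namespace is treated as masked when its NGUID reads back as
# thirty-two zeros, and visible when any other value comes back.
nguid_visible() {
  local nguid=$1
  [[ $nguid != "00000000000000000000000000000000" ]]
}
```

In the log, nsid 1 reads back 0935c0a4bffb444a8f3c120ee787b9a0 while visible, and 00000000000000000000000000000000 once it is re-added with --no-auto-visible.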
00:10:27.300 18:35:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:27.300 18:35:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:27.300 18:35:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:27.300 18:35:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:27.300 18:35:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:27.300 18:35:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:27.300 18:35:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:27.300 18:35:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:27.300 18:35:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:27.300 18:35:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:27.300 18:35:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:27.300 18:35:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:27.300 18:35:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:27.300 18:35:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:27.300 18:35:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:27.300 18:35:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:27.300 18:35:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:27.300 18:35:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:10:27.300 18:35:43 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:27.300 18:35:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:27.300 [ 0]:0x2 00:10:27.300 18:35:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:27.300 18:35:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:27.300 18:35:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ba546463f9e74a02a8e9612e0be3664e 00:10:27.300 18:35:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ba546463f9e74a02a8e9612e0be3664e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:27.300 18:35:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:27.559 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:10:27.559 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:27.559 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:27.559 [ 0]:0x1 00:10:27.559 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:27.559 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:27.559 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0935c0a4bffb444a8f3c120ee787b9a0 00:10:27.559 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0935c0a4bffb444a8f3c120ee787b9a0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:27.559 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:10:27.559 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:27.559 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 
0x2 00:10:27.559 [ 1]:0x2 00:10:27.559 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:27.559 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:27.559 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ba546463f9e74a02a8e9612e0be3664e 00:10:27.559 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ba546463f9e74a02a8e9612e0be3664e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:27.559 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:27.819 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:10:27.819 18:35:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:27.819 18:35:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:27.819 18:35:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:27.819 18:35:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:27.819 18:35:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:27.819 18:35:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:27.819 18:35:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:27.819 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:27.819 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:27.819 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:27.819 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # jq -r .nguid 00:10:27.819 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:27.819 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:27.819 18:35:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:27.819 18:35:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:27.819 18:35:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:27.819 18:35:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:27.819 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:10:27.819 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:27.819 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:27.819 [ 0]:0x2 00:10:27.819 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:27.819 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:27.819 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ba546463f9e74a02a8e9612e0be3664e 00:10:27.819 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ba546463f9e74a02a8e9612e0be3664e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:27.819 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:10:27.819 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:27.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.819 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host 
nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:28.078 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:10:28.078 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f3f6831e-01b2-49ce-9f52-cdc43138eab4 -a 10.0.0.2 -s 4420 -i 4 00:10:28.338 18:35:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:28.338 18:35:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:28.338 18:35:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:28.338 18:35:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:10:28.338 18:35:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:10:28.338 18:35:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:30.241 18:35:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:30.242 18:35:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:30.242 18:35:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:30.242 18:35:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:10:30.242 18:35:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:30.242 18:35:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:10:30.242 18:35:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:30.242 18:35:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:30.501 18:35:46 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:30.501 18:35:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:30.501 18:35:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:10:30.501 18:35:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:30.501 18:35:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:30.501 [ 0]:0x1 00:10:30.501 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:30.501 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:30.501 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0935c0a4bffb444a8f3c120ee787b9a0 00:10:30.501 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0935c0a4bffb444a8f3c120ee787b9a0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:30.501 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:10:30.501 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:30.501 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:30.501 [ 1]:0x2 00:10:30.501 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:30.501 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:30.501 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ba546463f9e74a02a8e9612e0be3664e 00:10:30.501 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ba546463f9e74a02a8e9612e0be3664e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:30.501 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:10:30.760 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:10:30.760 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:30.760 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:30.760 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:30.760 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:30.760 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:30.760 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:30.760 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:30.760 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:30.760 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:30.760 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:30.760 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:30.760 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:30.760 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:30.760 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:30.760 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:30.760 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:30.760 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:30.760 18:35:47 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:10:30.760 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:30.760 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:30.760 [ 0]:0x2 00:10:30.760 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:30.760 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:30.760 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ba546463f9e74a02a8e9612e0be3664e 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ba546463f9e74a02a8e9612e0be3664e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:31.019 [2024-07-15 18:35:47.625681] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:10:31.019 request: 00:10:31.019 { 00:10:31.019 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:31.019 "nsid": 2, 00:10:31.019 "host": "nqn.2016-06.io.spdk:host1", 00:10:31.019 "method": "nvmf_ns_remove_host", 00:10:31.019 "req_id": 1 00:10:31.019 } 00:10:31.019 Got JSON-RPC error response 00:10:31.019 response: 00:10:31.019 { 00:10:31.019 "code": -32602, 00:10:31.019 "message": "Invalid parameters" 00:10:31.019 } 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 
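The NOT wrapper driving these negative tests (autotest_common.sh@648-675 in the trace) inverts a command's exit status so that an expected failure, like the rejected nvmf_ns_remove_host call above, counts as a pass. A simplified sketch under that reading of the trace; the exact handling of statuses above 128 (commands killed by a signal) in the real helper is assumed here:

```shell
# Simplified sketch of the NOT helper from the xtrace above: run the
# wrapped command, capture its exit status, and succeed only when the
# command itself failed cleanly.
NOT() {
  local es=0
  "$@" || es=$?
  # statuses above 128 usually mean the command died on a signal;
  # treat that as a test error rather than an expected failure
  (( es > 128 )) && return 1
  (( es != 0 ))
}
```

This is why `NOT ns_is_visible 0x1` passes once the host is removed from the namespace: the zero-NGUID comparison fails, es becomes 1, and NOT converts that into success.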
00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:31.019 [ 0]:0x2 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:10:31.019 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:31.299 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ba546463f9e74a02a8e9612e0be3664e 00:10:31.299 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ba546463f9e74a02a8e9612e0be3664e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:31.299 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:10:31.299 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:31.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.299 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1006494 00:10:31.299 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:10:31.299 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1006494 /var/tmp/host.sock 00:10:31.299 18:35:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:10:31.299 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1006494 ']' 00:10:31.299 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:10:31.299 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:31.300 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:31.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:10:31.300 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:31.300 18:35:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:31.300 [2024-07-15 18:35:47.838504] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:10:31.300 [2024-07-15 18:35:47.838550] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1006494 ] 00:10:31.300 EAL: No free 2048 kB hugepages reported on node 1 00:10:31.300 [2024-07-15 18:35:47.893331] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.300 [2024-07-15 18:35:47.966202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.234 18:35:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:32.234 18:35:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:10:32.234 18:35:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.234 18:35:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:32.495 18:35:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid c8e358ba-ec1a-4ecc-a74a-4aad6a221280 00:10:32.495 18:35:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:10:32.495 18:35:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C8E358BAEC1A4ECCA74A4AAD6A221280 -i 00:10:32.495 18:35:49 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # uuid2nguid c1a95c2d-39a9-4951-8e02-3d6082969166 00:10:32.495 18:35:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:10:32.495 18:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g C1A95C2D39A949518E023D6082969166 -i 00:10:32.752 18:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:33.011 18:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:10:33.011 18:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:10:33.011 18:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:10:33.270 nvme0n1 00:10:33.530 18:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:10:33.530 18:35:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:10:33.789 nvme1n2 00:10:33.789 18:35:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 
00:10:33.789 18:35:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:10:33.789 18:35:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:10:33.789 18:35:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:10:33.789 18:35:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:10:34.047 18:35:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:10:34.047 18:35:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:10:34.047 18:35:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:10:34.047 18:35:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:10:34.047 18:35:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ c8e358ba-ec1a-4ecc-a74a-4aad6a221280 == \c\8\e\3\5\8\b\a\-\e\c\1\a\-\4\e\c\c\-\a\7\4\a\-\4\a\a\d\6\a\2\2\1\2\8\0 ]] 00:10:34.047 18:35:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:10:34.047 18:35:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:10:34.306 18:35:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:10:34.306 18:35:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ c1a95c2d-39a9-4951-8e02-3d6082969166 == \c\1\a\9\5\c\2\d\-\3\9\a\9\-\4\9\5\1\-\8\e\0\2\-\3\d\6\0\8\2\9\6\9\1\6\6 ]] 00:10:34.306 18:35:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1006494 00:10:34.306 18:35:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 
1006494 ']' 00:10:34.306 18:35:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1006494 00:10:34.306 18:35:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:10:34.306 18:35:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:34.306 18:35:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1006494 00:10:34.307 18:35:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:34.307 18:35:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:34.307 18:35:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1006494' 00:10:34.307 killing process with pid 1006494 00:10:34.307 18:35:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1006494 00:10:34.307 18:35:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1006494 00:10:34.875 18:35:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:34.875 18:35:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:10:34.875 18:35:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:10:34.875 18:35:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:34.875 18:35:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:10:34.875 18:35:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:34.875 18:35:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:10:34.875 18:35:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:34.875 18:35:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:34.875 rmmod nvme_tcp 00:10:34.875 rmmod nvme_fabrics 
00:10:34.875 rmmod nvme_keyring 00:10:34.875 18:35:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:34.875 18:35:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:10:34.875 18:35:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:10:34.875 18:35:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1004483 ']' 00:10:34.875 18:35:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1004483 00:10:34.875 18:35:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1004483 ']' 00:10:34.875 18:35:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1004483 00:10:34.875 18:35:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:10:34.875 18:35:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:34.875 18:35:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1004483 00:10:34.875 18:35:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:34.875 18:35:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:34.875 18:35:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1004483' 00:10:34.875 killing process with pid 1004483 00:10:34.875 18:35:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1004483 00:10:34.875 18:35:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1004483 00:10:35.134 18:35:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:35.134 18:35:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:35.134 18:35:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:35.134 18:35:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:35.134 
18:35:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:35.134 18:35:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:35.134 18:35:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:10:35.134 18:35:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:37.669 18:35:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:37.669
00:10:37.669 real 0m22.632s
00:10:37.669 user 0m24.507s
00:10:37.669 sys 0m6.040s
00:10:37.669 18:35:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:37.669 18:35:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:10:37.669 ************************************
00:10:37.669 END TEST nvmf_ns_masking
00:10:37.669 ************************************
00:10:37.669 18:35:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:10:37.669 18:35:53 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]]
00:10:37.669 18:35:53 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp
00:10:37.669 18:35:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:10:37.669 18:35:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:37.669 18:35:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:10:37.669 ************************************
00:10:37.669 START TEST nvmf_nvme_cli
00:10:37.669 ************************************
00:10:37.669 18:35:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp
00:10:37.669 * Looking for test storage...
00:10:37.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.669 18:35:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.669 18:35:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.669 18:35:54 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:37.669 18:35:54 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:10:37.669 18:35:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:42.941 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:42.941 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:10:42.941 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:42.941 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:42.941 18:35:59 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:42.941 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:42.941 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:42.941 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:10:42.941 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:42.941 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:10:42.941 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:10:42.941 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:10:42.941 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:10:42.941 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:10:42.941 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:10:42.941 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:42.942 18:35:59 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:42.942 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:42.942 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:42.942 18:35:59 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:42.942 Found net devices under 0000:86:00.0: cvl_0_0 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:42.942 Found net devices under 0000:86:00.1: cvl_0_1 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:10:42.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:42.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms
00:10:42.942
00:10:42.942 --- 10.0.0.2 ping statistics ---
00:10:42.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:42.942 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms
00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:42.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:42.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms
00:10:42.942
00:10:42.942 --- 10.0.0.1 ping statistics ---
00:10:42.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:42.942 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms
00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0
00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF
00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable
00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1010611
00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1010611
00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1010611 ']'
00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:42.942 18:35:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:42.942 [2024-07-15 18:35:59.541618] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:10:42.942 [2024-07-15 18:35:59.541661] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.942 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.942 [2024-07-15 18:35:59.599031] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:43.204 [2024-07-15 18:35:59.680825] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:43.204 [2024-07-15 18:35:59.680862] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:43.204 [2024-07-15 18:35:59.680869] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:43.204 [2024-07-15 18:35:59.680875] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:43.204 [2024-07-15 18:35:59.680880] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:43.204 [2024-07-15 18:35:59.680920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.204 [2024-07-15 18:35:59.681017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:43.204 [2024-07-15 18:35:59.681101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:43.204 [2024-07-15 18:35:59.681102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.772 18:36:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:43.772 18:36:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:10:43.772 18:36:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:43.772 18:36:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:43.772 18:36:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:43.772 18:36:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:43.772 18:36:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:43.772 18:36:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.772 18:36:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:43.772 [2024-07-15 18:36:00.405172] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:43.772 18:36:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.772 18:36:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:43.772 18:36:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.772 18:36:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:43.772 Malloc0 00:10:43.772 18:36:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.772 
18:36:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:43.772 18:36:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.772 18:36:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:43.772 Malloc1 00:10:43.772 18:36:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.772 18:36:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:10:43.772 18:36:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.772 18:36:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:43.772 18:36:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.772 18:36:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:43.772 18:36:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.772 18:36:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:43.772 18:36:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.772 18:36:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:43.772 18:36:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.772 18:36:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:44.031 18:36:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.031 18:36:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:44.031 18:36:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 
00:10:44.031 18:36:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:44.031 [2024-07-15 18:36:00.486684] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:44.031 18:36:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.031 18:36:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:44.031 18:36:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.031 18:36:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:44.031 18:36:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.031 18:36:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:44.031 00:10:44.031 Discovery Log Number of Records 2, Generation counter 2 00:10:44.031 =====Discovery Log Entry 0====== 00:10:44.031 trtype: tcp 00:10:44.031 adrfam: ipv4 00:10:44.031 subtype: current discovery subsystem 00:10:44.031 treq: not required 00:10:44.031 portid: 0 00:10:44.031 trsvcid: 4420 00:10:44.031 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:44.031 traddr: 10.0.0.2 00:10:44.031 eflags: explicit discovery connections, duplicate discovery information 00:10:44.031 sectype: none 00:10:44.031 =====Discovery Log Entry 1====== 00:10:44.031 trtype: tcp 00:10:44.031 adrfam: ipv4 00:10:44.031 subtype: nvme subsystem 00:10:44.031 treq: not required 00:10:44.031 portid: 0 00:10:44.031 trsvcid: 4420 00:10:44.031 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:44.031 traddr: 10.0.0.2 00:10:44.031 eflags: none 00:10:44.031 sectype: none 00:10:44.031 18:36:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:10:44.031 18:36:00 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:10:44.031 18:36:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:44.031 18:36:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:44.031 18:36:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:44.031 18:36:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:44.031 18:36:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:44.031 18:36:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:44.031 18:36:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:44.031 18:36:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:10:44.031 18:36:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:45.410 18:36:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:45.410 18:36:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:10:45.410 18:36:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:45.410 18:36:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:10:45.410 18:36:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:10:45.410 18:36:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:10:47.314 18:36:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:47.314 18:36:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:47.314 18:36:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 
00:10:47.314 18:36:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:10:47.314 18:36:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:47.314 18:36:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:10:47.314 18:36:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:10:47.314 18:36:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:47.314 18:36:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:47.314 18:36:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:47.314 18:36:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:47.314 18:36:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:47.314 18:36:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:47.314 18:36:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:47.314 18:36:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:47.314 18:36:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:10:47.314 18:36:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:47.314 18:36:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:47.314 18:36:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:10:47.314 18:36:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:47.314 18:36:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:10:47.314 /dev/nvme0n1 ]] 00:10:47.314 18:36:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:10:47.314 18:36:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:10:47.314 18:36:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 
00:10:47.314 18:36:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:47.314 18:36:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:47.573 18:36:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:47.573 18:36:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:47.573 18:36:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:47.573 18:36:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:47.573 18:36:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:47.573 18:36:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:10:47.573 18:36:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:47.573 18:36:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:47.573 18:36:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:10:47.573 18:36:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:47.573 18:36:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:10:47.573 18:36:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:47.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:47.832 rmmod nvme_tcp 00:10:47.832 rmmod nvme_fabrics 00:10:47.832 rmmod nvme_keyring 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1010611 ']' 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1010611 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@948 -- # '[' -z 1010611 ']' 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1010611 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:47.832 18:36:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1010611 00:10:48.091 18:36:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:48.091 18:36:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:48.091 18:36:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1010611' 00:10:48.091 killing process with pid 1010611 00:10:48.091 18:36:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1010611 00:10:48.091 18:36:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1010611 00:10:48.091 18:36:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:48.091 18:36:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:48.091 18:36:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:48.091 18:36:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:48.091 18:36:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:48.091 18:36:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.091 18:36:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:48.091 18:36:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.636 18:36:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:50.636 00:10:50.636 real 0m12.939s 00:10:50.636 user 
0m21.897s 00:10:50.636 sys 0m4.690s 00:10:50.636 18:36:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:50.636 18:36:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:50.636 ************************************ 00:10:50.636 END TEST nvmf_nvme_cli 00:10:50.636 ************************************ 00:10:50.636 18:36:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:50.636 18:36:06 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:10:50.636 18:36:06 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:10:50.636 18:36:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:50.636 18:36:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:50.636 18:36:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:50.636 ************************************ 00:10:50.636 START TEST nvmf_vfio_user 00:10:50.636 ************************************ 00:10:50.636 18:36:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:10:50.636 * Looking for test storage... 
00:10:50.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:50.636 18:36:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:50.636 18:36:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:50.636 
18:36:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1012142 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1012142' 00:10:50.636 Process pid: 1012142 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1012142 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1012142 ']' 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:50.636 18:36:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:50.636 [2024-07-15 18:36:07.064618] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:10:50.636 [2024-07-15 18:36:07.064666] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.636 EAL: No free 2048 kB hugepages reported on node 1 00:10:50.636 [2024-07-15 18:36:07.117742] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:50.636 [2024-07-15 18:36:07.197443] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:50.636 [2024-07-15 18:36:07.197480] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:50.636 [2024-07-15 18:36:07.197487] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.636 [2024-07-15 18:36:07.197492] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.636 [2024-07-15 18:36:07.197497] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:50.636 [2024-07-15 18:36:07.197541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.636 [2024-07-15 18:36:07.197635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.637 [2024-07-15 18:36:07.197700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.637 [2024-07-15 18:36:07.197702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.204 18:36:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:51.205 18:36:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:10:51.205 18:36:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:52.608 18:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:10:52.608 18:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:52.608 18:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:52.608 18:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:52.608 18:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:52.608 18:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:52.608 Malloc1 00:10:52.878 18:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:52.878 18:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:53.142 18:36:09 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:53.401 18:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:53.401 18:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:53.401 18:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:53.401 Malloc2 00:10:53.401 18:36:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:53.661 18:36:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:53.920 18:36:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:54.180 18:36:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:10:54.180 18:36:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:10:54.180 18:36:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:54.180 18:36:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:54.180 18:36:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:10:54.180 18:36:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:54.180 [2024-07-15 18:36:10.667725] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:10:54.180 [2024-07-15 18:36:10.667758] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1013169 ] 00:10:54.180 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.180 [2024-07-15 18:36:10.695769] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:10:54.180 [2024-07-15 18:36:10.705590] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:54.180 [2024-07-15 18:36:10.705611] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f18a1ab6000 00:10:54.180 [2024-07-15 18:36:10.706598] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:54.181 [2024-07-15 18:36:10.707585] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:54.181 [2024-07-15 18:36:10.708591] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:54.181 [2024-07-15 18:36:10.709597] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:54.181 [2024-07-15 18:36:10.710603] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 
5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:54.181 [2024-07-15 18:36:10.711613] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:54.181 [2024-07-15 18:36:10.712617] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:54.181 [2024-07-15 18:36:10.713625] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:54.181 [2024-07-15 18:36:10.714632] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:54.181 [2024-07-15 18:36:10.714641] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f18a1aab000 00:10:54.181 [2024-07-15 18:36:10.715586] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:54.181 [2024-07-15 18:36:10.728194] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:10:54.181 [2024-07-15 18:36:10.728218] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:10:54.181 [2024-07-15 18:36:10.733756] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:54.181 [2024-07-15 18:36:10.733795] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:54.181 [2024-07-15 18:36:10.733866] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:10:54.181 [2024-07-15 18:36:10.733884] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:10:54.181 [2024-07-15 18:36:10.733889] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:10:54.181 [2024-07-15 18:36:10.734761] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:10:54.181 [2024-07-15 18:36:10.734771] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:10:54.181 [2024-07-15 18:36:10.734778] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:10:54.181 [2024-07-15 18:36:10.735769] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:54.181 [2024-07-15 18:36:10.735778] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:10:54.181 [2024-07-15 18:36:10.735784] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:10:54.181 [2024-07-15 18:36:10.736777] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:10:54.181 [2024-07-15 18:36:10.736787] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:54.181 [2024-07-15 18:36:10.737780] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:10:54.181 [2024-07-15 18:36:10.737789] 
nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:10:54.181 [2024-07-15 18:36:10.737793] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:10:54.181 [2024-07-15 18:36:10.737801] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:54.181 [2024-07-15 18:36:10.737907] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:10:54.181 [2024-07-15 18:36:10.737911] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:54.181 [2024-07-15 18:36:10.737916] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:10:54.181 [2024-07-15 18:36:10.738785] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:10:54.181 [2024-07-15 18:36:10.739786] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:10:54.181 [2024-07-15 18:36:10.740793] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:54.181 [2024-07-15 18:36:10.741796] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:54.181 [2024-07-15 18:36:10.741857] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:54.181 [2024-07-15 18:36:10.742806] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:10:54.181 [2024-07-15 18:36:10.742814] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:54.181 [2024-07-15 18:36:10.742819] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:10:54.181 [2024-07-15 18:36:10.742836] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:10:54.181 [2024-07-15 18:36:10.742843] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:10:54.181 [2024-07-15 18:36:10.742856] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:54.181 [2024-07-15 18:36:10.742861] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:54.181 [2024-07-15 18:36:10.742874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:54.181 [2024-07-15 18:36:10.742914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:54.181 [2024-07-15 18:36:10.742922] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:10:54.181 [2024-07-15 18:36:10.742931] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:10:54.181 [2024-07-15 18:36:10.742935] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 
00:10:54.181 [2024-07-15 18:36:10.742939] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:54.181 [2024-07-15 18:36:10.742943] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:10:54.181 [2024-07-15 18:36:10.742947] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:10:54.181 [2024-07-15 18:36:10.742952] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:10:54.181 [2024-07-15 18:36:10.742961] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:10:54.181 [2024-07-15 18:36:10.742970] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:54.181 [2024-07-15 18:36:10.742981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:54.181 [2024-07-15 18:36:10.742994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.181 [2024-07-15 18:36:10.743002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.181 [2024-07-15 18:36:10.743009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.181 [2024-07-15 18:36:10.743016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.181 [2024-07-15 18:36:10.743020] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:10:54.181 [2024-07-15 18:36:10.743028] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:54.181 [2024-07-15 18:36:10.743036] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:54.181 [2024-07-15 18:36:10.743046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:54.181 [2024-07-15 18:36:10.743051] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:10:54.181 [2024-07-15 18:36:10.743056] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:54.181 [2024-07-15 18:36:10.743062] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:10:54.181 [2024-07-15 18:36:10.743067] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:10:54.181 [2024-07-15 18:36:10.743074] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:54.181 [2024-07-15 18:36:10.743082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:54.181 [2024-07-15 18:36:10.743131] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 
00:10:54.181 [2024-07-15 18:36:10.743137] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:10:54.181 [2024-07-15 18:36:10.743144] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:54.181 [2024-07-15 18:36:10.743148] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:54.181 [2024-07-15 18:36:10.743154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:54.181 [2024-07-15 18:36:10.743170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:54.181 [2024-07-15 18:36:10.743178] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:10:54.181 [2024-07-15 18:36:10.743185] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:10:54.181 [2024-07-15 18:36:10.743194] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:10:54.181 [2024-07-15 18:36:10.743200] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:54.181 [2024-07-15 18:36:10.743204] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:54.181 [2024-07-15 18:36:10.743210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:54.182 [2024-07-15 18:36:10.743234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:54.182 
[2024-07-15 18:36:10.743246] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:54.182 [2024-07-15 18:36:10.743253] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:54.182 [2024-07-15 18:36:10.743259] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:54.182 [2024-07-15 18:36:10.743263] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:54.182 [2024-07-15 18:36:10.743269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:54.182 [2024-07-15 18:36:10.743278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:54.182 [2024-07-15 18:36:10.743285] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:54.182 [2024-07-15 18:36:10.743291] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:10:54.182 [2024-07-15 18:36:10.743299] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:10:54.182 [2024-07-15 18:36:10.743303] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:10:54.182 [2024-07-15 18:36:10.743308] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 
30000 ms) 00:10:54.182 [2024-07-15 18:36:10.743312] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:10:54.182 [2024-07-15 18:36:10.743317] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:10:54.182 [2024-07-15 18:36:10.743321] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:10:54.182 [2024-07-15 18:36:10.743325] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:10:54.182 [2024-07-15 18:36:10.743341] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:54.182 [2024-07-15 18:36:10.743353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:54.182 [2024-07-15 18:36:10.743363] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:54.182 [2024-07-15 18:36:10.743371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:54.182 [2024-07-15 18:36:10.743380] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:54.182 [2024-07-15 18:36:10.743388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:54.182 [2024-07-15 18:36:10.743398] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:54.182 [2024-07-15 18:36:10.743410] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:54.182 [2024-07-15 18:36:10.743422] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:54.182 [2024-07-15 18:36:10.743426] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:54.182 [2024-07-15 18:36:10.743429] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:54.182 [2024-07-15 18:36:10.743432] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:54.182 [2024-07-15 18:36:10.743438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:54.182 [2024-07-15 18:36:10.743444] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:54.182 [2024-07-15 18:36:10.743448] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:54.182 [2024-07-15 18:36:10.743453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:54.182 [2024-07-15 18:36:10.743459] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:54.182 [2024-07-15 18:36:10.743463] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:54.182 [2024-07-15 18:36:10.743469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:54.182 [2024-07-15 18:36:10.743475] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:54.182 [2024-07-15 18:36:10.743479] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:54.182 [2024-07-15 18:36:10.743484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:54.182 [2024-07-15 18:36:10.743490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:54.182 [2024-07-15 18:36:10.743501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:54.182 [2024-07-15 18:36:10.743512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:54.182 [2024-07-15 18:36:10.743518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:54.182 ===================================================== 00:10:54.182 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:54.182 ===================================================== 00:10:54.182 Controller Capabilities/Features 00:10:54.182 ================================ 00:10:54.182 Vendor ID: 4e58 00:10:54.182 Subsystem Vendor ID: 4e58 00:10:54.182 Serial Number: SPDK1 00:10:54.182 Model Number: SPDK bdev Controller 00:10:54.182 Firmware Version: 24.09 00:10:54.182 Recommended Arb Burst: 6 00:10:54.182 IEEE OUI Identifier: 8d 6b 50 00:10:54.182 Multi-path I/O 00:10:54.182 May have multiple subsystem ports: Yes 00:10:54.182 May have multiple controllers: Yes 00:10:54.182 Associated with SR-IOV VF: No 00:10:54.182 Max Data Transfer Size: 131072 00:10:54.182 Max Number of Namespaces: 32 00:10:54.182 Max Number of I/O Queues: 127 00:10:54.182 NVMe Specification Version (VS): 1.3 00:10:54.182 NVMe Specification Version (Identify): 1.3 00:10:54.182 Maximum Queue Entries: 256 00:10:54.182 
Contiguous Queues Required: Yes 00:10:54.182 Arbitration Mechanisms Supported 00:10:54.182 Weighted Round Robin: Not Supported 00:10:54.182 Vendor Specific: Not Supported 00:10:54.182 Reset Timeout: 15000 ms 00:10:54.182 Doorbell Stride: 4 bytes 00:10:54.182 NVM Subsystem Reset: Not Supported 00:10:54.182 Command Sets Supported 00:10:54.182 NVM Command Set: Supported 00:10:54.182 Boot Partition: Not Supported 00:10:54.182 Memory Page Size Minimum: 4096 bytes 00:10:54.182 Memory Page Size Maximum: 4096 bytes 00:10:54.182 Persistent Memory Region: Not Supported 00:10:54.182 Optional Asynchronous Events Supported 00:10:54.182 Namespace Attribute Notices: Supported 00:10:54.182 Firmware Activation Notices: Not Supported 00:10:54.182 ANA Change Notices: Not Supported 00:10:54.182 PLE Aggregate Log Change Notices: Not Supported 00:10:54.182 LBA Status Info Alert Notices: Not Supported 00:10:54.182 EGE Aggregate Log Change Notices: Not Supported 00:10:54.182 Normal NVM Subsystem Shutdown event: Not Supported 00:10:54.182 Zone Descriptor Change Notices: Not Supported 00:10:54.182 Discovery Log Change Notices: Not Supported 00:10:54.182 Controller Attributes 00:10:54.182 128-bit Host Identifier: Supported 00:10:54.182 Non-Operational Permissive Mode: Not Supported 00:10:54.182 NVM Sets: Not Supported 00:10:54.182 Read Recovery Levels: Not Supported 00:10:54.182 Endurance Groups: Not Supported 00:10:54.182 Predictable Latency Mode: Not Supported 00:10:54.182 Traffic Based Keep ALive: Not Supported 00:10:54.182 Namespace Granularity: Not Supported 00:10:54.182 SQ Associations: Not Supported 00:10:54.182 UUID List: Not Supported 00:10:54.182 Multi-Domain Subsystem: Not Supported 00:10:54.182 Fixed Capacity Management: Not Supported 00:10:54.182 Variable Capacity Management: Not Supported 00:10:54.182 Delete Endurance Group: Not Supported 00:10:54.182 Delete NVM Set: Not Supported 00:10:54.182 Extended LBA Formats Supported: Not Supported 00:10:54.182 Flexible Data Placement 
Supported: Not Supported 00:10:54.182 00:10:54.182 Controller Memory Buffer Support 00:10:54.182 ================================ 00:10:54.182 Supported: No 00:10:54.182 00:10:54.182 Persistent Memory Region Support 00:10:54.182 ================================ 00:10:54.182 Supported: No 00:10:54.182 00:10:54.182 Admin Command Set Attributes 00:10:54.182 ============================ 00:10:54.182 Security Send/Receive: Not Supported 00:10:54.182 Format NVM: Not Supported 00:10:54.182 Firmware Activate/Download: Not Supported 00:10:54.182 Namespace Management: Not Supported 00:10:54.182 Device Self-Test: Not Supported 00:10:54.182 Directives: Not Supported 00:10:54.182 NVMe-MI: Not Supported 00:10:54.182 Virtualization Management: Not Supported 00:10:54.182 Doorbell Buffer Config: Not Supported 00:10:54.182 Get LBA Status Capability: Not Supported 00:10:54.182 Command & Feature Lockdown Capability: Not Supported 00:10:54.182 Abort Command Limit: 4 00:10:54.182 Async Event Request Limit: 4 00:10:54.182 Number of Firmware Slots: N/A 00:10:54.182 Firmware Slot 1 Read-Only: N/A 00:10:54.182 Firmware Activation Without Reset: N/A 00:10:54.182 Multiple Update Detection Support: N/A 00:10:54.182 Firmware Update Granularity: No Information Provided 00:10:54.182 Per-Namespace SMART Log: No 00:10:54.182 Asymmetric Namespace Access Log Page: Not Supported 00:10:54.182 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:10:54.182 Command Effects Log Page: Supported 00:10:54.182 Get Log Page Extended Data: Supported 00:10:54.182 Telemetry Log Pages: Not Supported 00:10:54.182 Persistent Event Log Pages: Not Supported 00:10:54.182 Supported Log Pages Log Page: May Support 00:10:54.182 Commands Supported & Effects Log Page: Not Supported 00:10:54.182 Feature Identifiers & Effects Log Page:May Support 00:10:54.182 NVMe-MI Commands & Effects Log Page: May Support 00:10:54.182 Data Area 4 for Telemetry Log: Not Supported 00:10:54.183 Error Log Page Entries Supported: 128 00:10:54.183 Keep 
Alive: Supported 00:10:54.183 Keep Alive Granularity: 10000 ms 00:10:54.183 00:10:54.183 NVM Command Set Attributes 00:10:54.183 ========================== 00:10:54.183 Submission Queue Entry Size 00:10:54.183 Max: 64 00:10:54.183 Min: 64 00:10:54.183 Completion Queue Entry Size 00:10:54.183 Max: 16 00:10:54.183 Min: 16 00:10:54.183 Number of Namespaces: 32 00:10:54.183 Compare Command: Supported 00:10:54.183 Write Uncorrectable Command: Not Supported 00:10:54.183 Dataset Management Command: Supported 00:10:54.183 Write Zeroes Command: Supported 00:10:54.183 Set Features Save Field: Not Supported 00:10:54.183 Reservations: Not Supported 00:10:54.183 Timestamp: Not Supported 00:10:54.183 Copy: Supported 00:10:54.183 Volatile Write Cache: Present 00:10:54.183 Atomic Write Unit (Normal): 1 00:10:54.183 Atomic Write Unit (PFail): 1 00:10:54.183 Atomic Compare & Write Unit: 1 00:10:54.183 Fused Compare & Write: Supported 00:10:54.183 Scatter-Gather List 00:10:54.183 SGL Command Set: Supported (Dword aligned) 00:10:54.183 SGL Keyed: Not Supported 00:10:54.183 SGL Bit Bucket Descriptor: Not Supported 00:10:54.183 SGL Metadata Pointer: Not Supported 00:10:54.183 Oversized SGL: Not Supported 00:10:54.183 SGL Metadata Address: Not Supported 00:10:54.183 SGL Offset: Not Supported 00:10:54.183 Transport SGL Data Block: Not Supported 00:10:54.183 Replay Protected Memory Block: Not Supported 00:10:54.183 00:10:54.183 Firmware Slot Information 00:10:54.183 ========================= 00:10:54.183 Active slot: 1 00:10:54.183 Slot 1 Firmware Revision: 24.09 00:10:54.183 00:10:54.183 00:10:54.183 Commands Supported and Effects 00:10:54.183 ============================== 00:10:54.183 Admin Commands 00:10:54.183 -------------- 00:10:54.183 Get Log Page (02h): Supported 00:10:54.183 Identify (06h): Supported 00:10:54.183 Abort (08h): Supported 00:10:54.183 Set Features (09h): Supported 00:10:54.183 Get Features (0Ah): Supported 00:10:54.183 Asynchronous Event Request (0Ch): Supported 
00:10:54.183 Keep Alive (18h): Supported 00:10:54.183 I/O Commands 00:10:54.183 ------------ 00:10:54.183 Flush (00h): Supported LBA-Change 00:10:54.183 Write (01h): Supported LBA-Change 00:10:54.183 Read (02h): Supported 00:10:54.183 Compare (05h): Supported 00:10:54.183 Write Zeroes (08h): Supported LBA-Change 00:10:54.183 Dataset Management (09h): Supported LBA-Change 00:10:54.183 Copy (19h): Supported LBA-Change 00:10:54.183 00:10:54.183 Error Log 00:10:54.183 ========= 00:10:54.183 00:10:54.183 Arbitration 00:10:54.183 =========== 00:10:54.183 Arbitration Burst: 1 00:10:54.183 00:10:54.183 Power Management 00:10:54.183 ================ 00:10:54.183 Number of Power States: 1 00:10:54.183 Current Power State: Power State #0 00:10:54.183 Power State #0: 00:10:54.183 Max Power: 0.00 W 00:10:54.183 Non-Operational State: Operational 00:10:54.183 Entry Latency: Not Reported 00:10:54.183 Exit Latency: Not Reported 00:10:54.183 Relative Read Throughput: 0 00:10:54.183 Relative Read Latency: 0 00:10:54.183 Relative Write Throughput: 0 00:10:54.183 Relative Write Latency: 0 00:10:54.183 Idle Power: Not Reported 00:10:54.183 Active Power: Not Reported 00:10:54.183 Non-Operational Permissive Mode: Not Supported 00:10:54.183 00:10:54.183 Health Information 00:10:54.183 ================== 00:10:54.183 Critical Warnings: 00:10:54.183 Available Spare Space: OK 00:10:54.183 Temperature: OK 00:10:54.183 Device Reliability: OK 00:10:54.183 Read Only: No 00:10:54.183 Volatile Memory Backup: OK 00:10:54.183 Current Temperature: 0 Kelvin (-273 Celsius) 00:10:54.183 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:54.183 Available Spare: 0% 00:10:54.183 Available Sp[2024-07-15 18:36:10.743606] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:54.183 [2024-07-15 18:36:10.743613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 
00:10:54.183 [2024-07-15 18:36:10.743638] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:10:54.183 [2024-07-15 18:36:10.743646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.183 [2024-07-15 18:36:10.743652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.183 [2024-07-15 18:36:10.743657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.183 [2024-07-15 18:36:10.743662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.183 [2024-07-15 18:36:10.743815] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:54.183 [2024-07-15 18:36:10.743824] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:10:54.183 [2024-07-15 18:36:10.744818] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:54.183 [2024-07-15 18:36:10.744869] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:10:54.183 [2024-07-15 18:36:10.744876] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:10:54.183 [2024-07-15 18:36:10.745826] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:10:54.183 [2024-07-15 18:36:10.745837] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete 
in 0 milliseconds 00:10:54.183 [2024-07-15 18:36:10.745884] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:10:54.183 [2024-07-15 18:36:10.747858] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:54.183 are Threshold: 0% 00:10:54.183 Life Percentage Used: 0% 00:10:54.183 Data Units Read: 0 00:10:54.183 Data Units Written: 0 00:10:54.183 Host Read Commands: 0 00:10:54.183 Host Write Commands: 0 00:10:54.183 Controller Busy Time: 0 minutes 00:10:54.183 Power Cycles: 0 00:10:54.183 Power On Hours: 0 hours 00:10:54.183 Unsafe Shutdowns: 0 00:10:54.183 Unrecoverable Media Errors: 0 00:10:54.183 Lifetime Error Log Entries: 0 00:10:54.183 Warning Temperature Time: 0 minutes 00:10:54.183 Critical Temperature Time: 0 minutes 00:10:54.183 00:10:54.183 Number of Queues 00:10:54.183 ================ 00:10:54.183 Number of I/O Submission Queues: 127 00:10:54.183 Number of I/O Completion Queues: 127 00:10:54.183 00:10:54.183 Active Namespaces 00:10:54.183 ================= 00:10:54.183 Namespace ID:1 00:10:54.183 Error Recovery Timeout: Unlimited 00:10:54.183 Command Set Identifier: NVM (00h) 00:10:54.183 Deallocate: Supported 00:10:54.183 Deallocated/Unwritten Error: Not Supported 00:10:54.183 Deallocated Read Value: Unknown 00:10:54.183 Deallocate in Write Zeroes: Not Supported 00:10:54.183 Deallocated Guard Field: 0xFFFF 00:10:54.183 Flush: Supported 00:10:54.183 Reservation: Supported 00:10:54.183 Namespace Sharing Capabilities: Multiple Controllers 00:10:54.183 Size (in LBAs): 131072 (0GiB) 00:10:54.183 Capacity (in LBAs): 131072 (0GiB) 00:10:54.183 Utilization (in LBAs): 131072 (0GiB) 00:10:54.183 NGUID: 29C5AFC0E60D4579B73F9788C2B2E82A 00:10:54.183 UUID: 29c5afc0-e60d-4579-b73f-9788c2b2e82a 00:10:54.183 Thin Provisioning: Not Supported 00:10:54.183 Per-NS Atomic Units: Yes 00:10:54.183 Atomic Boundary Size (Normal): 0 
00:10:54.183 Atomic Boundary Size (PFail): 0 00:10:54.183 Atomic Boundary Offset: 0 00:10:54.183 Maximum Single Source Range Length: 65535 00:10:54.183 Maximum Copy Length: 65535 00:10:54.183 Maximum Source Range Count: 1 00:10:54.183 NGUID/EUI64 Never Reused: No 00:10:54.183 Namespace Write Protected: No 00:10:54.183 Number of LBA Formats: 1 00:10:54.183 Current LBA Format: LBA Format #00 00:10:54.183 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:54.183 00:10:54.183 18:36:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:54.183 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.442 [2024-07-15 18:36:10.960123] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:59.713 Initializing NVMe Controllers 00:10:59.713 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:59.713 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:59.713 Initialization complete. Launching workers. 
00:10:59.713 ======================================================== 00:10:59.713 Latency(us) 00:10:59.713 Device Information : IOPS MiB/s Average min max 00:10:59.714 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39928.79 155.97 3205.52 947.98 6969.87 00:10:59.714 ======================================================== 00:10:59.714 Total : 39928.79 155.97 3205.52 947.98 6969.87 00:10:59.714 00:10:59.714 [2024-07-15 18:36:15.983968] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:59.714 18:36:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:59.714 EAL: No free 2048 kB hugepages reported on node 1 00:10:59.714 [2024-07-15 18:36:16.208026] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:04.987 Initializing NVMe Controllers 00:11:04.987 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:04.987 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:04.987 Initialization complete. Launching workers. 
00:11:04.987 ======================================================== 00:11:04.987 Latency(us) 00:11:04.987 Device Information : IOPS MiB/s Average min max 00:11:04.987 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.03 62.70 7979.90 6988.26 8971.95 00:11:04.987 ======================================================== 00:11:04.987 Total : 16051.03 62.70 7979.90 6988.26 8971.95 00:11:04.987 00:11:04.987 [2024-07-15 18:36:21.251232] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:04.987 18:36:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:04.987 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.987 [2024-07-15 18:36:21.448215] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:10.262 [2024-07-15 18:36:26.558705] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:10.262 Initializing NVMe Controllers 00:11:10.262 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:10.262 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:10.262 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:11:10.262 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:11:10.262 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:11:10.262 Initialization complete. Launching workers. 
00:11:10.262 Starting thread on core 2 00:11:10.262 Starting thread on core 3 00:11:10.262 Starting thread on core 1 00:11:10.262 18:36:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:11:10.262 EAL: No free 2048 kB hugepages reported on node 1 00:11:10.262 [2024-07-15 18:36:26.838609] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:13.550 [2024-07-15 18:36:29.895147] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:13.550 Initializing NVMe Controllers 00:11:13.550 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:13.550 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:13.550 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:11:13.550 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:11:13.550 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:11:13.550 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:11:13.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:13.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:13.550 Initialization complete. Launching workers. 
00:11:13.550 Starting thread on core 1 with urgent priority queue 00:11:13.550 Starting thread on core 2 with urgent priority queue 00:11:13.550 Starting thread on core 3 with urgent priority queue 00:11:13.550 Starting thread on core 0 with urgent priority queue 00:11:13.550 SPDK bdev Controller (SPDK1 ) core 0: 8087.00 IO/s 12.37 secs/100000 ios 00:11:13.550 SPDK bdev Controller (SPDK1 ) core 1: 9497.00 IO/s 10.53 secs/100000 ios 00:11:13.550 SPDK bdev Controller (SPDK1 ) core 2: 10747.00 IO/s 9.30 secs/100000 ios 00:11:13.550 SPDK bdev Controller (SPDK1 ) core 3: 8083.00 IO/s 12.37 secs/100000 ios 00:11:13.550 ======================================================== 00:11:13.550 00:11:13.550 18:36:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:13.550 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.550 [2024-07-15 18:36:30.169674] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:13.550 Initializing NVMe Controllers 00:11:13.550 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:13.550 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:13.550 Namespace ID: 1 size: 0GB 00:11:13.550 Initialization complete. 00:11:13.550 INFO: using host memory buffer for IO 00:11:13.550 Hello world! 
00:11:13.550 [2024-07-15 18:36:30.202880] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:13.550 18:36:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:13.808 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.808 [2024-07-15 18:36:30.472623] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:15.186 Initializing NVMe Controllers 00:11:15.186 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:15.186 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:15.186 Initialization complete. Launching workers. 00:11:15.186 submit (in ns) avg, min, max = 7399.9, 3281.7, 4001096.5 00:11:15.186 complete (in ns) avg, min, max = 21287.7, 1802.6, 4995695.7 00:11:15.186 00:11:15.186 Submit histogram 00:11:15.186 ================ 00:11:15.186 Range in us Cumulative Count 00:11:15.186 3.270 - 3.283: 0.0123% ( 2) 00:11:15.186 3.283 - 3.297: 0.2144% ( 33) 00:11:15.186 3.297 - 3.311: 0.7902% ( 94) 00:11:15.186 3.311 - 3.325: 1.7580% ( 158) 00:11:15.186 3.325 - 3.339: 3.8775% ( 346) 00:11:15.186 3.339 - 3.353: 8.9066% ( 821) 00:11:15.186 3.353 - 3.367: 14.4257% ( 901) 00:11:15.186 3.367 - 3.381: 20.0184% ( 913) 00:11:15.186 3.381 - 3.395: 26.0827% ( 990) 00:11:15.186 3.395 - 3.409: 32.3124% ( 1017) 00:11:15.186 3.409 - 3.423: 37.7090% ( 881) 00:11:15.186 3.423 - 3.437: 43.2649% ( 907) 00:11:15.186 3.437 - 3.450: 48.7228% ( 891) 00:11:15.186 3.450 - 3.464: 53.2741% ( 743) 00:11:15.186 3.464 - 3.478: 57.2986% ( 657) 00:11:15.186 3.478 - 3.492: 63.1363% ( 953) 00:11:15.186 3.492 - 3.506: 70.0276% ( 1125) 00:11:15.186 3.506 - 3.520: 74.1746% ( 677) 00:11:15.186 3.520 - 3.534: 78.2297% ( 662) 00:11:15.186 3.534 - 3.548: 82.3155% ( 667) 
00:11:15.186 3.548 - 3.562: 84.9311% ( 427) 00:11:15.186 3.562 - 3.590: 87.2894% ( 385) 00:11:15.186 3.590 - 3.617: 88.1531% ( 141) 00:11:15.186 3.617 - 3.645: 89.1271% ( 159) 00:11:15.186 3.645 - 3.673: 90.9219% ( 293) 00:11:15.186 3.673 - 3.701: 92.6003% ( 274) 00:11:15.186 3.701 - 3.729: 94.2603% ( 271) 00:11:15.186 3.729 - 3.757: 96.0184% ( 287) 00:11:15.186 3.757 - 3.784: 97.2864% ( 207) 00:11:15.186 3.784 - 3.812: 98.3461% ( 173) 00:11:15.186 3.812 - 3.840: 98.8913% ( 89) 00:11:15.186 3.840 - 3.868: 99.2343% ( 56) 00:11:15.186 3.868 - 3.896: 99.4793% ( 40) 00:11:15.186 3.896 - 3.923: 99.6018% ( 20) 00:11:15.186 3.923 - 3.951: 99.6202% ( 3) 00:11:15.186 3.951 - 3.979: 99.6263% ( 1) 00:11:15.186 3.979 - 4.007: 99.6325% ( 1) 00:11:15.186 4.063 - 4.090: 99.6386% ( 1) 00:11:15.186 5.315 - 5.343: 99.6447% ( 1) 00:11:15.186 5.370 - 5.398: 99.6508% ( 1) 00:11:15.186 5.398 - 5.426: 99.6570% ( 1) 00:11:15.186 5.454 - 5.482: 99.6631% ( 1) 00:11:15.186 5.482 - 5.510: 99.6692% ( 1) 00:11:15.186 5.510 - 5.537: 99.6753% ( 1) 00:11:15.186 5.537 - 5.565: 99.6815% ( 1) 00:11:15.186 5.593 - 5.621: 99.6876% ( 1) 00:11:15.186 5.677 - 5.704: 99.6937% ( 1) 00:11:15.186 5.760 - 5.788: 99.6998% ( 1) 00:11:15.186 5.843 - 5.871: 99.7060% ( 1) 00:11:15.186 5.955 - 5.983: 99.7121% ( 1) 00:11:15.186 5.983 - 6.010: 99.7182% ( 1) 00:11:15.186 6.094 - 6.122: 99.7243% ( 1) 00:11:15.186 6.400 - 6.428: 99.7305% ( 1) 00:11:15.186 6.678 - 6.706: 99.7366% ( 1) 00:11:15.186 6.734 - 6.762: 99.7427% ( 1) 00:11:15.186 6.762 - 6.790: 99.7489% ( 1) 00:11:15.186 6.817 - 6.845: 99.7550% ( 1) 00:11:15.186 6.984 - 7.012: 99.7611% ( 1) 00:11:15.186 7.068 - 7.096: 99.7734% ( 2) 00:11:15.186 7.123 - 7.179: 99.7795% ( 1) 00:11:15.186 7.179 - 7.235: 99.7856% ( 1) 00:11:15.186 7.235 - 7.290: 99.8040% ( 3) 00:11:15.186 7.290 - 7.346: 99.8101% ( 1) 00:11:15.186 7.402 - 7.457: 99.8162% ( 1) 00:11:15.186 7.457 - 7.513: 99.8224% ( 1) 00:11:15.186 7.513 - 7.569: 99.8285% ( 1) 00:11:15.186 7.847 - 7.903: 99.8346% ( 1) 
00:11:15.186 8.014 - 8.070: 99.8407% ( 1) 00:11:15.186 8.292 - 8.348: 99.8530% ( 2) 00:11:15.186 8.348 - 8.403: 99.8591% ( 1) 00:11:15.186 8.570 - 8.626: 99.8652% ( 1) 00:11:15.186 8.904 - 8.960: 99.8714% ( 1) 00:11:15.186 8.960 - 9.016: 99.8775% ( 1) 00:11:15.186 9.016 - 9.071: 99.8836% ( 1) 00:11:15.186 9.517 - 9.572: 99.8897% ( 1) 00:11:15.186 9.906 - 9.962: 99.8959% ( 1) 00:11:15.186 13.412 - 13.468: 99.9020% ( 1) 00:11:15.186 3989.148 - 4017.642: 100.0000% ( 16) 00:11:15.186 00:11:15.186 Complete histogram 00:11:15.186 ================== 00:11:15.186 [2024-07-15 18:36:31.490522] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:15.186 Range in us Cumulative Count 00:11:15.186 1.795 - 1.809: 0.0061% ( 1) 00:11:15.186 1.809 - 1.823: 0.5513% ( 89) 00:11:15.186 1.823 - 1.837: 1.9418% ( 227) 00:11:15.186 1.837 - 1.850: 3.2956% ( 221) 00:11:15.186 1.850 - 1.864: 25.9724% ( 3702) 00:11:15.186 1.864 - 1.878: 81.3170% ( 9035) 00:11:15.186 1.878 - 1.892: 92.4043% ( 1810) 00:11:15.186 1.892 - 1.906: 95.5161% ( 508) 00:11:15.186 1.906 - 1.920: 96.2940% ( 127) 00:11:15.186 1.920 - 1.934: 96.9617% ( 109) 00:11:15.186 1.934 - 1.948: 98.2787% ( 215) 00:11:15.186 1.948 - 1.962: 98.9709% ( 113) 00:11:15.186 1.962 - 1.976: 99.1853% ( 35) 00:11:15.186 1.976 - 1.990: 99.2466% ( 10) 00:11:15.186 1.990 - 2.003: 99.2772% ( 5) 00:11:15.186 2.003 - 2.017: 99.2894% ( 2) 00:11:15.186 2.017 - 2.031: 99.2956% ( 1) 00:11:15.186 2.031 - 2.045: 99.3017% ( 1) 00:11:15.186 2.045 - 2.059: 99.3139% ( 2) 00:11:15.186 2.087 - 2.101: 99.3201% ( 1) 00:11:15.186 2.115 - 2.129: 99.3262% ( 1) 00:11:15.186 2.129 - 2.143: 99.3323% ( 1) 00:11:15.187 2.296 - 2.310: 99.3384% ( 1) 00:11:15.187 3.701 - 3.729: 99.3446% ( 1) 00:11:15.187 3.812 - 3.840: 99.3507% ( 1) 00:11:15.187 3.951 - 3.979: 99.3568% ( 1) 00:11:15.187 3.979 - 4.007: 99.3629% ( 1) 00:11:15.187 4.174 - 4.202: 99.3691% ( 1) 00:11:15.187 4.230 - 4.257: 99.3813% ( 2) 00:11:15.187 4.619 - 
4.647: 99.3874% ( 1) 00:11:15.187 4.675 - 4.703: 99.3936% ( 1) 00:11:15.187 4.758 - 4.786: 99.3997% ( 1) 00:11:15.187 4.786 - 4.814: 99.4058% ( 1) 00:11:15.187 4.814 - 4.842: 99.4119% ( 1) 00:11:15.187 4.925 - 4.953: 99.4242% ( 2) 00:11:15.187 5.203 - 5.231: 99.4303% ( 1) 00:11:15.187 5.315 - 5.343: 99.4364% ( 1) 00:11:15.187 5.621 - 5.649: 99.4426% ( 1) 00:11:15.187 5.649 - 5.677: 99.4487% ( 1) 00:11:15.187 5.677 - 5.704: 99.4548% ( 1) 00:11:15.187 6.010 - 6.038: 99.4609% ( 1) 00:11:15.187 6.038 - 6.066: 99.4671% ( 1) 00:11:15.187 6.205 - 6.233: 99.4732% ( 1) 00:11:15.187 6.233 - 6.261: 99.4793% ( 1) 00:11:15.187 6.261 - 6.289: 99.4855% ( 1) 00:11:15.187 6.289 - 6.317: 99.4916% ( 1) 00:11:15.187 6.428 - 6.456: 99.4977% ( 1) 00:11:15.187 9.350 - 9.405: 99.5038% ( 1) 00:11:15.187 14.581 - 14.692: 99.5100% ( 1) 00:11:15.187 40.070 - 40.292: 99.5161% ( 1) 00:11:15.187 3006.108 - 3020.355: 99.5222% ( 1) 00:11:15.187 3191.318 - 3205.565: 99.5283% ( 1) 00:11:15.187 3989.148 - 4017.642: 99.9816% ( 74) 00:11:15.187 4986.435 - 5014.929: 100.0000% ( 3) 00:11:15.187 00:11:15.187 18:36:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:11:15.187 18:36:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:15.187 18:36:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:11:15.187 18:36:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:11:15.187 18:36:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:15.187 [ 00:11:15.187 { 00:11:15.187 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:15.187 "subtype": "Discovery", 00:11:15.187 "listen_addresses": [], 00:11:15.187 "allow_any_host": true, 00:11:15.187 "hosts": [] 
00:11:15.187 }, 00:11:15.187 { 00:11:15.187 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:15.187 "subtype": "NVMe", 00:11:15.187 "listen_addresses": [ 00:11:15.187 { 00:11:15.187 "trtype": "VFIOUSER", 00:11:15.187 "adrfam": "IPv4", 00:11:15.187 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:15.187 "trsvcid": "0" 00:11:15.187 } 00:11:15.187 ], 00:11:15.187 "allow_any_host": true, 00:11:15.187 "hosts": [], 00:11:15.187 "serial_number": "SPDK1", 00:11:15.187 "model_number": "SPDK bdev Controller", 00:11:15.187 "max_namespaces": 32, 00:11:15.187 "min_cntlid": 1, 00:11:15.187 "max_cntlid": 65519, 00:11:15.187 "namespaces": [ 00:11:15.187 { 00:11:15.187 "nsid": 1, 00:11:15.187 "bdev_name": "Malloc1", 00:11:15.187 "name": "Malloc1", 00:11:15.187 "nguid": "29C5AFC0E60D4579B73F9788C2B2E82A", 00:11:15.187 "uuid": "29c5afc0-e60d-4579-b73f-9788c2b2e82a" 00:11:15.187 } 00:11:15.187 ] 00:11:15.187 }, 00:11:15.187 { 00:11:15.187 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:15.187 "subtype": "NVMe", 00:11:15.187 "listen_addresses": [ 00:11:15.187 { 00:11:15.187 "trtype": "VFIOUSER", 00:11:15.187 "adrfam": "IPv4", 00:11:15.187 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:15.187 "trsvcid": "0" 00:11:15.187 } 00:11:15.187 ], 00:11:15.187 "allow_any_host": true, 00:11:15.187 "hosts": [], 00:11:15.187 "serial_number": "SPDK2", 00:11:15.187 "model_number": "SPDK bdev Controller", 00:11:15.187 "max_namespaces": 32, 00:11:15.187 "min_cntlid": 1, 00:11:15.187 "max_cntlid": 65519, 00:11:15.187 "namespaces": [ 00:11:15.187 { 00:11:15.187 "nsid": 1, 00:11:15.187 "bdev_name": "Malloc2", 00:11:15.187 "name": "Malloc2", 00:11:15.187 "nguid": "F4B19DE0ABEB4710AB1FAEEABE3893A6", 00:11:15.187 "uuid": "f4b19de0-abeb-4710-ab1f-aeeabe3893a6" 00:11:15.187 } 00:11:15.187 ] 00:11:15.187 } 00:11:15.187 ] 00:11:15.187 18:36:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:15.187 18:36:31 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@34 -- # aerpid=1016693 00:11:15.187 18:36:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:15.187 18:36:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:11:15.187 18:36:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:11:15.187 18:36:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:15.187 18:36:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:15.187 18:36:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:11:15.187 18:36:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:15.187 18:36:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:11:15.187 EAL: No free 2048 kB hugepages reported on node 1 00:11:15.187 [2024-07-15 18:36:31.861868] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:15.444 Malloc3 00:11:15.444 18:36:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:11:15.444 [2024-07-15 18:36:32.086643] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:15.444 18:36:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:15.444 Asynchronous Event Request test 00:11:15.444 Attaching to 
/var/run/vfio-user/domain/vfio-user1/1 00:11:15.444 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:15.444 Registering asynchronous event callbacks... 00:11:15.444 Starting namespace attribute notice tests for all controllers... 00:11:15.444 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:15.444 aer_cb - Changed Namespace 00:11:15.444 Cleaning up... 00:11:15.703 [ 00:11:15.703 { 00:11:15.703 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:15.703 "subtype": "Discovery", 00:11:15.703 "listen_addresses": [], 00:11:15.703 "allow_any_host": true, 00:11:15.703 "hosts": [] 00:11:15.703 }, 00:11:15.703 { 00:11:15.703 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:15.703 "subtype": "NVMe", 00:11:15.703 "listen_addresses": [ 00:11:15.703 { 00:11:15.703 "trtype": "VFIOUSER", 00:11:15.703 "adrfam": "IPv4", 00:11:15.703 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:15.703 "trsvcid": "0" 00:11:15.703 } 00:11:15.703 ], 00:11:15.703 "allow_any_host": true, 00:11:15.703 "hosts": [], 00:11:15.703 "serial_number": "SPDK1", 00:11:15.703 "model_number": "SPDK bdev Controller", 00:11:15.704 "max_namespaces": 32, 00:11:15.704 "min_cntlid": 1, 00:11:15.704 "max_cntlid": 65519, 00:11:15.704 "namespaces": [ 00:11:15.704 { 00:11:15.704 "nsid": 1, 00:11:15.704 "bdev_name": "Malloc1", 00:11:15.704 "name": "Malloc1", 00:11:15.704 "nguid": "29C5AFC0E60D4579B73F9788C2B2E82A", 00:11:15.704 "uuid": "29c5afc0-e60d-4579-b73f-9788c2b2e82a" 00:11:15.704 }, 00:11:15.704 { 00:11:15.704 "nsid": 2, 00:11:15.704 "bdev_name": "Malloc3", 00:11:15.704 "name": "Malloc3", 00:11:15.704 "nguid": "99621748FBFD4161B778CF739BC7B38C", 00:11:15.704 "uuid": "99621748-fbfd-4161-b778-cf739bc7b38c" 00:11:15.704 } 00:11:15.704 ] 00:11:15.704 }, 00:11:15.704 { 00:11:15.704 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:15.704 "subtype": "NVMe", 00:11:15.704 "listen_addresses": [ 00:11:15.704 { 00:11:15.704 "trtype": "VFIOUSER", 00:11:15.704 
"adrfam": "IPv4", 00:11:15.704 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:15.704 "trsvcid": "0" 00:11:15.704 } 00:11:15.704 ], 00:11:15.704 "allow_any_host": true, 00:11:15.704 "hosts": [], 00:11:15.704 "serial_number": "SPDK2", 00:11:15.704 "model_number": "SPDK bdev Controller", 00:11:15.704 "max_namespaces": 32, 00:11:15.704 "min_cntlid": 1, 00:11:15.704 "max_cntlid": 65519, 00:11:15.704 "namespaces": [ 00:11:15.704 { 00:11:15.704 "nsid": 1, 00:11:15.704 "bdev_name": "Malloc2", 00:11:15.704 "name": "Malloc2", 00:11:15.704 "nguid": "F4B19DE0ABEB4710AB1FAEEABE3893A6", 00:11:15.704 "uuid": "f4b19de0-abeb-4710-ab1f-aeeabe3893a6" 00:11:15.704 } 00:11:15.704 ] 00:11:15.704 } 00:11:15.704 ] 00:11:15.704 18:36:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1016693 00:11:15.704 18:36:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:15.704 18:36:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:15.704 18:36:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:11:15.704 18:36:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:15.704 [2024-07-15 18:36:32.320587] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:11:15.704 [2024-07-15 18:36:32.320623] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1016706 ] 00:11:15.704 EAL: No free 2048 kB hugepages reported on node 1 00:11:15.704 [2024-07-15 18:36:32.350635] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:11:15.704 [2024-07-15 18:36:32.361139] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:15.704 [2024-07-15 18:36:32.361162] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd3a6e81000 00:11:15.704 [2024-07-15 18:36:32.362143] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:15.704 [2024-07-15 18:36:32.363148] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:15.704 [2024-07-15 18:36:32.364153] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:15.704 [2024-07-15 18:36:32.365162] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:15.704 [2024-07-15 18:36:32.366167] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:15.704 [2024-07-15 18:36:32.367174] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:15.704 [2024-07-15 18:36:32.368180] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, 
Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:15.704 [2024-07-15 18:36:32.369187] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:15.704 [2024-07-15 18:36:32.370200] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:15.704 [2024-07-15 18:36:32.370211] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd3a6e76000 00:11:15.704 [2024-07-15 18:36:32.371154] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:15.704 [2024-07-15 18:36:32.379680] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:11:15.704 [2024-07-15 18:36:32.379704] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:11:15.704 [2024-07-15 18:36:32.384806] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:15.704 [2024-07-15 18:36:32.384845] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:15.704 [2024-07-15 18:36:32.384915] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:11:15.704 [2024-07-15 18:36:32.384931] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:11:15.704 [2024-07-15 18:36:32.384936] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:11:15.704 [2024-07-15 18:36:32.385795] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:11:15.704 [2024-07-15 18:36:32.385805] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:11:15.704 [2024-07-15 18:36:32.385811] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:11:15.704 [2024-07-15 18:36:32.386796] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:15.704 [2024-07-15 18:36:32.386805] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:11:15.704 [2024-07-15 18:36:32.386812] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:11:15.704 [2024-07-15 18:36:32.387800] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:11:15.704 [2024-07-15 18:36:32.387809] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:15.704 [2024-07-15 18:36:32.388812] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:11:15.704 [2024-07-15 18:36:32.388820] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:11:15.704 [2024-07-15 18:36:32.388825] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:11:15.704 [2024-07-15 18:36:32.388831] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:15.704 [2024-07-15 18:36:32.388936] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:11:15.704 [2024-07-15 18:36:32.388940] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:15.704 [2024-07-15 18:36:32.388944] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:11:15.704 [2024-07-15 18:36:32.389822] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:11:15.704 [2024-07-15 18:36:32.390833] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:11:15.704 [2024-07-15 18:36:32.391849] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:15.704 [2024-07-15 18:36:32.392847] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:15.704 [2024-07-15 18:36:32.392885] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:15.704 [2024-07-15 18:36:32.393856] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:11:15.704 [2024-07-15 18:36:32.393864] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:15.704 [2024-07-15 18:36:32.393869] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:11:15.704 [2024-07-15 18:36:32.393886] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:11:15.704 [2024-07-15 18:36:32.393892] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:11:15.704 [2024-07-15 18:36:32.393904] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:15.704 [2024-07-15 18:36:32.393909] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:15.704 [2024-07-15 18:36:32.393919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:15.704 [2024-07-15 18:36:32.401234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:15.704 [2024-07-15 18:36:32.401246] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:11:15.704 [2024-07-15 18:36:32.401253] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:11:15.704 [2024-07-15 18:36:32.401257] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:11:15.704 [2024-07-15 18:36:32.401261] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:15.704 [2024-07-15 18:36:32.401265] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:11:15.704 [2024-07-15 
18:36:32.401269] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:11:15.704 [2024-07-15 18:36:32.401273] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:11:15.704 [2024-07-15 18:36:32.401280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:11:15.704 [2024-07-15 18:36:32.401289] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:15.704 [2024-07-15 18:36:32.409234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:15.704 [2024-07-15 18:36:32.409249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.704 [2024-07-15 18:36:32.409257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.705 [2024-07-15 18:36:32.409264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.705 [2024-07-15 18:36:32.409271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.705 [2024-07-15 18:36:32.409276] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:11:15.705 [2024-07-15 18:36:32.409284] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:15.705 [2024-07-15 
18:36:32.409292] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:15.964 [2024-07-15 18:36:32.416263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:15.964 [2024-07-15 18:36:32.416274] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:11:15.964 [2024-07-15 18:36:32.416280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:15.964 [2024-07-15 18:36:32.416286] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:11:15.964 [2024-07-15 18:36:32.416292] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:11:15.964 [2024-07-15 18:36:32.416300] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:15.964 [2024-07-15 18:36:32.425230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:15.964 [2024-07-15 18:36:32.425282] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:11:15.964 [2024-07-15 18:36:32.425290] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:11:15.964 [2024-07-15 18:36:32.425297] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:15.964 [2024-07-15 
18:36:32.425301] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:15.964 [2024-07-15 18:36:32.425307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:15.964 [2024-07-15 18:36:32.433230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:15.964 [2024-07-15 18:36:32.433241] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:11:15.964 [2024-07-15 18:36:32.433253] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:11:15.964 [2024-07-15 18:36:32.433259] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:11:15.964 [2024-07-15 18:36:32.433266] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:15.964 [2024-07-15 18:36:32.433270] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:15.964 [2024-07-15 18:36:32.433276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:15.964 [2024-07-15 18:36:32.441232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:15.965 [2024-07-15 18:36:32.441245] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:15.965 [2024-07-15 18:36:32.441252] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id 
descriptors (timeout 30000 ms) 00:11:15.965 [2024-07-15 18:36:32.441259] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:15.965 [2024-07-15 18:36:32.441263] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:15.965 [2024-07-15 18:36:32.441268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:15.965 [2024-07-15 18:36:32.449233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:15.965 [2024-07-15 18:36:32.449242] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:15.965 [2024-07-15 18:36:32.449248] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:11:15.965 [2024-07-15 18:36:32.449254] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:11:15.965 [2024-07-15 18:36:32.449260] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:11:15.965 [2024-07-15 18:36:32.449264] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:15.965 [2024-07-15 18:36:32.449269] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:11:15.965 [2024-07-15 18:36:32.449278] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - 
Host ID 00:11:15.965 [2024-07-15 18:36:32.449282] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:11:15.965 [2024-07-15 18:36:32.449286] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:11:15.965 [2024-07-15 18:36:32.449302] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:15.965 [2024-07-15 18:36:32.457232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:15.965 [2024-07-15 18:36:32.457244] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:15.965 [2024-07-15 18:36:32.465233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:15.965 [2024-07-15 18:36:32.465244] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:15.965 [2024-07-15 18:36:32.473231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:15.965 [2024-07-15 18:36:32.473244] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:15.965 [2024-07-15 18:36:32.481231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:15.965 [2024-07-15 18:36:32.481247] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:15.965 [2024-07-15 18:36:32.481251] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:15.965 [2024-07-15 
18:36:32.481255] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:15.965 [2024-07-15 18:36:32.481258] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:15.965 [2024-07-15 18:36:32.481264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:15.965 [2024-07-15 18:36:32.481270] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:15.965 [2024-07-15 18:36:32.481275] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:15.965 [2024-07-15 18:36:32.481280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:15.965 [2024-07-15 18:36:32.481286] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:15.965 [2024-07-15 18:36:32.481290] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:15.965 [2024-07-15 18:36:32.481296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:15.965 [2024-07-15 18:36:32.481302] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:15.965 [2024-07-15 18:36:32.481306] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:15.965 [2024-07-15 18:36:32.481312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:15.965 [2024-07-15 18:36:32.489230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 
cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:15.965 [2024-07-15 18:36:32.489249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:15.965 [2024-07-15 18:36:32.489261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:15.965 [2024-07-15 18:36:32.489267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:15.965 ===================================================== 00:11:15.965 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:15.965 ===================================================== 00:11:15.965 Controller Capabilities/Features 00:11:15.965 ================================ 00:11:15.965 Vendor ID: 4e58 00:11:15.965 Subsystem Vendor ID: 4e58 00:11:15.965 Serial Number: SPDK2 00:11:15.965 Model Number: SPDK bdev Controller 00:11:15.965 Firmware Version: 24.09 00:11:15.965 Recommended Arb Burst: 6 00:11:15.965 IEEE OUI Identifier: 8d 6b 50 00:11:15.965 Multi-path I/O 00:11:15.965 May have multiple subsystem ports: Yes 00:11:15.965 May have multiple controllers: Yes 00:11:15.965 Associated with SR-IOV VF: No 00:11:15.965 Max Data Transfer Size: 131072 00:11:15.965 Max Number of Namespaces: 32 00:11:15.965 Max Number of I/O Queues: 127 00:11:15.965 NVMe Specification Version (VS): 1.3 00:11:15.965 NVMe Specification Version (Identify): 1.3 00:11:15.965 Maximum Queue Entries: 256 00:11:15.965 Contiguous Queues Required: Yes 00:11:15.965 Arbitration Mechanisms Supported 00:11:15.965 Weighted Round Robin: Not Supported 00:11:15.965 Vendor Specific: Not Supported 00:11:15.965 Reset Timeout: 15000 ms 00:11:15.965 Doorbell Stride: 4 bytes 00:11:15.965 NVM Subsystem Reset: Not Supported 00:11:15.965 Command Sets Supported 00:11:15.965 NVM Command Set: Supported 00:11:15.965 Boot Partition: Not Supported 
00:11:15.965 Memory Page Size Minimum: 4096 bytes 00:11:15.965 Memory Page Size Maximum: 4096 bytes 00:11:15.965 Persistent Memory Region: Not Supported 00:11:15.965 Optional Asynchronous Events Supported 00:11:15.965 Namespace Attribute Notices: Supported 00:11:15.965 Firmware Activation Notices: Not Supported 00:11:15.965 ANA Change Notices: Not Supported 00:11:15.965 PLE Aggregate Log Change Notices: Not Supported 00:11:15.965 LBA Status Info Alert Notices: Not Supported 00:11:15.965 EGE Aggregate Log Change Notices: Not Supported 00:11:15.965 Normal NVM Subsystem Shutdown event: Not Supported 00:11:15.965 Zone Descriptor Change Notices: Not Supported 00:11:15.965 Discovery Log Change Notices: Not Supported 00:11:15.965 Controller Attributes 00:11:15.965 128-bit Host Identifier: Supported 00:11:15.965 Non-Operational Permissive Mode: Not Supported 00:11:15.965 NVM Sets: Not Supported 00:11:15.965 Read Recovery Levels: Not Supported 00:11:15.965 Endurance Groups: Not Supported 00:11:15.965 Predictable Latency Mode: Not Supported 00:11:15.965 Traffic Based Keep ALive: Not Supported 00:11:15.965 Namespace Granularity: Not Supported 00:11:15.965 SQ Associations: Not Supported 00:11:15.965 UUID List: Not Supported 00:11:15.965 Multi-Domain Subsystem: Not Supported 00:11:15.965 Fixed Capacity Management: Not Supported 00:11:15.965 Variable Capacity Management: Not Supported 00:11:15.965 Delete Endurance Group: Not Supported 00:11:15.965 Delete NVM Set: Not Supported 00:11:15.965 Extended LBA Formats Supported: Not Supported 00:11:15.965 Flexible Data Placement Supported: Not Supported 00:11:15.965 00:11:15.965 Controller Memory Buffer Support 00:11:15.965 ================================ 00:11:15.965 Supported: No 00:11:15.965 00:11:15.965 Persistent Memory Region Support 00:11:15.965 ================================ 00:11:15.965 Supported: No 00:11:15.965 00:11:15.965 Admin Command Set Attributes 00:11:15.965 ============================ 00:11:15.965 Security 
Send/Receive: Not Supported 00:11:15.965 Format NVM: Not Supported 00:11:15.965 Firmware Activate/Download: Not Supported 00:11:15.965 Namespace Management: Not Supported 00:11:15.965 Device Self-Test: Not Supported 00:11:15.965 Directives: Not Supported 00:11:15.965 NVMe-MI: Not Supported 00:11:15.965 Virtualization Management: Not Supported 00:11:15.965 Doorbell Buffer Config: Not Supported 00:11:15.965 Get LBA Status Capability: Not Supported 00:11:15.965 Command & Feature Lockdown Capability: Not Supported 00:11:15.965 Abort Command Limit: 4 00:11:15.965 Async Event Request Limit: 4 00:11:15.965 Number of Firmware Slots: N/A 00:11:15.965 Firmware Slot 1 Read-Only: N/A 00:11:15.965 Firmware Activation Without Reset: N/A 00:11:15.965 Multiple Update Detection Support: N/A 00:11:15.965 Firmware Update Granularity: No Information Provided 00:11:15.965 Per-Namespace SMART Log: No 00:11:15.965 Asymmetric Namespace Access Log Page: Not Supported 00:11:15.965 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:11:15.965 Command Effects Log Page: Supported 00:11:15.965 Get Log Page Extended Data: Supported 00:11:15.965 Telemetry Log Pages: Not Supported 00:11:15.965 Persistent Event Log Pages: Not Supported 00:11:15.965 Supported Log Pages Log Page: May Support 00:11:15.965 Commands Supported & Effects Log Page: Not Supported 00:11:15.965 Feature Identifiers & Effects Log Page:May Support 00:11:15.965 NVMe-MI Commands & Effects Log Page: May Support 00:11:15.965 Data Area 4 for Telemetry Log: Not Supported 00:11:15.965 Error Log Page Entries Supported: 128 00:11:15.965 Keep Alive: Supported 00:11:15.965 Keep Alive Granularity: 10000 ms 00:11:15.965 00:11:15.965 NVM Command Set Attributes 00:11:15.965 ========================== 00:11:15.965 Submission Queue Entry Size 00:11:15.965 Max: 64 00:11:15.965 Min: 64 00:11:15.965 Completion Queue Entry Size 00:11:15.965 Max: 16 00:11:15.965 Min: 16 00:11:15.965 Number of Namespaces: 32 00:11:15.965 Compare Command: Supported 
00:11:15.965 Write Uncorrectable Command: Not Supported 00:11:15.965 Dataset Management Command: Supported 00:11:15.965 Write Zeroes Command: Supported 00:11:15.965 Set Features Save Field: Not Supported 00:11:15.965 Reservations: Not Supported 00:11:15.965 Timestamp: Not Supported 00:11:15.965 Copy: Supported 00:11:15.965 Volatile Write Cache: Present 00:11:15.965 Atomic Write Unit (Normal): 1 00:11:15.965 Atomic Write Unit (PFail): 1 00:11:15.965 Atomic Compare & Write Unit: 1 00:11:15.965 Fused Compare & Write: Supported 00:11:15.965 Scatter-Gather List 00:11:15.965 SGL Command Set: Supported (Dword aligned) 00:11:15.965 SGL Keyed: Not Supported 00:11:15.965 SGL Bit Bucket Descriptor: Not Supported 00:11:15.965 SGL Metadata Pointer: Not Supported 00:11:15.965 Oversized SGL: Not Supported 00:11:15.965 SGL Metadata Address: Not Supported 00:11:15.965 SGL Offset: Not Supported 00:11:15.965 Transport SGL Data Block: Not Supported 00:11:15.965 Replay Protected Memory Block: Not Supported 00:11:15.965 00:11:15.965 Firmware Slot Information 00:11:15.965 ========================= 00:11:15.965 Active slot: 1 00:11:15.965 Slot 1 Firmware Revision: 24.09 00:11:15.965 00:11:15.965 00:11:15.965 Commands Supported and Effects 00:11:15.965 ============================== 00:11:15.965 Admin Commands 00:11:15.965 -------------- 00:11:15.965 Get Log Page (02h): Supported 00:11:15.965 Identify (06h): Supported 00:11:15.965 Abort (08h): Supported 00:11:15.965 Set Features (09h): Supported 00:11:15.965 Get Features (0Ah): Supported 00:11:15.965 Asynchronous Event Request (0Ch): Supported 00:11:15.965 Keep Alive (18h): Supported 00:11:15.965 I/O Commands 00:11:15.965 ------------ 00:11:15.965 Flush (00h): Supported LBA-Change 00:11:15.965 Write (01h): Supported LBA-Change 00:11:15.965 Read (02h): Supported 00:11:15.965 Compare (05h): Supported 00:11:15.965 Write Zeroes (08h): Supported LBA-Change 00:11:15.965 Dataset Management (09h): Supported LBA-Change 00:11:15.965 Copy (19h): 
Supported LBA-Change 00:11:15.965 00:11:15.965 Error Log 00:11:15.965 ========= 00:11:15.965 00:11:15.965 Arbitration 00:11:15.965 =========== 00:11:15.965 Arbitration Burst: 1 00:11:15.965 00:11:15.965 Power Management 00:11:15.965 ================ 00:11:15.965 Number of Power States: 1 00:11:15.965 Current Power State: Power State #0 00:11:15.965 Power State #0: 00:11:15.965 Max Power: 0.00 W 00:11:15.965 Non-Operational State: Operational 00:11:15.965 Entry Latency: Not Reported 00:11:15.965 Exit Latency: Not Reported 00:11:15.965 Relative Read Throughput: 0 00:11:15.965 Relative Read Latency: 0 00:11:15.965 Relative Write Throughput: 0 00:11:15.965 Relative Write Latency: 0 00:11:15.965 Idle Power: Not Reported 00:11:15.965 Active Power: Not Reported 00:11:15.965 Non-Operational Permissive Mode: Not Supported 00:11:15.965 00:11:15.965 Health Information 00:11:15.965 ================== 00:11:15.965 Critical Warnings: 00:11:15.965 Available Spare Space: OK 00:11:15.965 Temperature: OK 00:11:15.965 Device Reliability: OK 00:11:15.965 Read Only: No 00:11:15.965 Volatile Memory Backup: OK 00:11:15.965 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:15.965 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:15.965 Available Spare: 0% 00:11:15.965 Available Sp[2024-07-15 18:36:32.489354] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:15.965 [2024-07-15 18:36:32.497231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:15.965 [2024-07-15 18:36:32.497263] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:11:15.965 [2024-07-15 18:36:32.497272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.965 [2024-07-15 18:36:32.497278] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.965 [2024-07-15 18:36:32.497283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.965 [2024-07-15 18:36:32.497288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.965 [2024-07-15 18:36:32.497327] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:15.965 [2024-07-15 18:36:32.497337] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:11:15.965 [2024-07-15 18:36:32.498331] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:15.965 [2024-07-15 18:36:32.498374] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:11:15.965 [2024-07-15 18:36:32.498381] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:11:15.965 [2024-07-15 18:36:32.499339] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:11:15.965 [2024-07-15 18:36:32.499350] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:11:15.965 [2024-07-15 18:36:32.499396] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:11:15.965 [2024-07-15 18:36:32.500373] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:15.965 are Threshold: 0% 00:11:15.965 
Life Percentage Used: 0% 00:11:15.965 Data Units Read: 0 00:11:15.965 Data Units Written: 0 00:11:15.965 Host Read Commands: 0 00:11:15.965 Host Write Commands: 0 00:11:15.965 Controller Busy Time: 0 minutes 00:11:15.965 Power Cycles: 0 00:11:15.965 Power On Hours: 0 hours 00:11:15.965 Unsafe Shutdowns: 0 00:11:15.965 Unrecoverable Media Errors: 0 00:11:15.965 Lifetime Error Log Entries: 0 00:11:15.965 Warning Temperature Time: 0 minutes 00:11:15.965 Critical Temperature Time: 0 minutes 00:11:15.965 00:11:15.965 Number of Queues 00:11:15.965 ================ 00:11:15.965 Number of I/O Submission Queues: 127 00:11:15.965 Number of I/O Completion Queues: 127 00:11:15.965 00:11:15.965 Active Namespaces 00:11:15.965 ================= 00:11:15.965 Namespace ID:1 00:11:15.965 Error Recovery Timeout: Unlimited 00:11:15.965 Command Set Identifier: NVM (00h) 00:11:15.965 Deallocate: Supported 00:11:15.965 Deallocated/Unwritten Error: Not Supported 00:11:15.965 Deallocated Read Value: Unknown 00:11:15.965 Deallocate in Write Zeroes: Not Supported 00:11:15.965 Deallocated Guard Field: 0xFFFF 00:11:15.965 Flush: Supported 00:11:15.965 Reservation: Supported 00:11:15.965 Namespace Sharing Capabilities: Multiple Controllers 00:11:15.965 Size (in LBAs): 131072 (0GiB) 00:11:15.965 Capacity (in LBAs): 131072 (0GiB) 00:11:15.965 Utilization (in LBAs): 131072 (0GiB) 00:11:15.965 NGUID: F4B19DE0ABEB4710AB1FAEEABE3893A6 00:11:15.966 UUID: f4b19de0-abeb-4710-ab1f-aeeabe3893a6 00:11:15.966 Thin Provisioning: Not Supported 00:11:15.966 Per-NS Atomic Units: Yes 00:11:15.966 Atomic Boundary Size (Normal): 0 00:11:15.966 Atomic Boundary Size (PFail): 0 00:11:15.966 Atomic Boundary Offset: 0 00:11:15.966 Maximum Single Source Range Length: 65535 00:11:15.966 Maximum Copy Length: 65535 00:11:15.966 Maximum Source Range Count: 1 00:11:15.966 NGUID/EUI64 Never Reused: No 00:11:15.966 Namespace Write Protected: No 00:11:15.966 Number of LBA Formats: 1 00:11:15.966 Current LBA Format: LBA Format 
#00 00:11:15.966 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:15.966 00:11:15.966 18:36:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:15.966 EAL: No free 2048 kB hugepages reported on node 1 00:11:16.223 [2024-07-15 18:36:32.714576] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:21.531 Initializing NVMe Controllers 00:11:21.531 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:21.531 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:21.531 Initialization complete. Launching workers. 00:11:21.531 ======================================================== 00:11:21.531 Latency(us) 00:11:21.531 Device Information : IOPS MiB/s Average min max 00:11:21.531 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39944.06 156.03 3204.31 970.88 6654.73 00:11:21.531 ======================================================== 00:11:21.531 Total : 39944.06 156.03 3204.31 970.88 6654.73 00:11:21.531 00:11:21.531 [2024-07-15 18:36:37.821475] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:21.531 18:36:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:21.531 EAL: No free 2048 kB hugepages reported on node 1 00:11:21.532 [2024-07-15 18:36:38.041046] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:26.806 
Initializing NVMe Controllers 00:11:26.806 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:26.806 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:26.806 Initialization complete. Launching workers. 00:11:26.806 ======================================================== 00:11:26.806 Latency(us) 00:11:26.806 Device Information : IOPS MiB/s Average min max 00:11:26.806 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39912.98 155.91 3206.55 960.52 7538.13 00:11:26.806 ======================================================== 00:11:26.806 Total : 39912.98 155.91 3206.55 960.52 7538.13 00:11:26.806 00:11:26.806 [2024-07-15 18:36:43.058975] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:26.806 18:36:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:26.806 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.806 [2024-07-15 18:36:43.245363] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:32.079 [2024-07-15 18:36:48.376316] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:32.079 Initializing NVMe Controllers 00:11:32.079 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:32.079 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:32.079 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:11:32.079 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 
00:11:32.079 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:11:32.079 Initialization complete. Launching workers. 00:11:32.079 Starting thread on core 2 00:11:32.079 Starting thread on core 3 00:11:32.079 Starting thread on core 1 00:11:32.079 18:36:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:11:32.079 EAL: No free 2048 kB hugepages reported on node 1 00:11:32.079 [2024-07-15 18:36:48.656658] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:35.399 [2024-07-15 18:36:51.736174] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:35.399 Initializing NVMe Controllers 00:11:35.399 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:35.399 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:35.399 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:11:35.399 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:11:35.399 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:11:35.399 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:11:35.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:35.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:35.399 Initialization complete. Launching workers. 
00:11:35.399 Starting thread on core 1 with urgent priority queue 00:11:35.399 Starting thread on core 2 with urgent priority queue 00:11:35.399 Starting thread on core 3 with urgent priority queue 00:11:35.399 Starting thread on core 0 with urgent priority queue 00:11:35.399 SPDK bdev Controller (SPDK2 ) core 0: 9099.67 IO/s 10.99 secs/100000 ios 00:11:35.399 SPDK bdev Controller (SPDK2 ) core 1: 8009.33 IO/s 12.49 secs/100000 ios 00:11:35.400 SPDK bdev Controller (SPDK2 ) core 2: 8794.67 IO/s 11.37 secs/100000 ios 00:11:35.400 SPDK bdev Controller (SPDK2 ) core 3: 9804.67 IO/s 10.20 secs/100000 ios 00:11:35.400 ======================================================== 00:11:35.400 00:11:35.400 18:36:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:35.400 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.400 [2024-07-15 18:36:52.011678] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:35.400 Initializing NVMe Controllers 00:11:35.400 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:35.400 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:35.400 Namespace ID: 1 size: 0GB 00:11:35.400 Initialization complete. 00:11:35.400 INFO: using host memory buffer for IO 00:11:35.400 Hello world! 
00:11:35.400 [2024-07-15 18:36:52.024767] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:35.400 18:36:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:35.659 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.659 [2024-07-15 18:36:52.292275] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:37.038 Initializing NVMe Controllers 00:11:37.038 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:37.038 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:37.038 Initialization complete. Launching workers. 00:11:37.038 submit (in ns) avg, min, max = 6689.4, 3267.8, 4000281.7 00:11:37.038 complete (in ns) avg, min, max = 22534.6, 1787.0, 3999611.3 00:11:37.038 00:11:37.038 Submit histogram 00:11:37.038 ================ 00:11:37.038 Range in us Cumulative Count 00:11:37.038 3.256 - 3.270: 0.0062% ( 1) 00:11:37.038 3.270 - 3.283: 0.0246% ( 3) 00:11:37.038 3.283 - 3.297: 0.1109% ( 14) 00:11:37.038 3.297 - 3.311: 0.3450% ( 38) 00:11:37.038 3.311 - 3.325: 0.7577% ( 67) 00:11:37.038 3.325 - 3.339: 2.4580% ( 276) 00:11:37.038 3.339 - 3.353: 6.5299% ( 661) 00:11:37.038 3.353 - 3.367: 12.1111% ( 906) 00:11:37.038 3.367 - 3.381: 18.2160% ( 991) 00:11:37.038 3.381 - 3.395: 24.6165% ( 1039) 00:11:37.038 3.395 - 3.409: 30.5612% ( 965) 00:11:37.038 3.409 - 3.423: 35.7666% ( 845) 00:11:37.038 3.423 - 3.437: 41.4588% ( 924) 00:11:37.038 3.437 - 3.450: 46.3377% ( 792) 00:11:37.038 3.450 - 3.464: 49.9415% ( 585) 00:11:37.038 3.464 - 3.478: 53.6253% ( 598) 00:11:37.038 3.478 - 3.492: 59.0895% ( 887) 00:11:37.038 3.492 - 3.506: 65.9829% ( 1119) 00:11:37.038 3.506 - 3.520: 70.3074% ( 702) 00:11:37.038 3.520 - 3.534: 75.0508% ( 770) 
00:11:37.038 3.534 - 3.548: 79.8682% ( 782) 00:11:37.038 3.548 - 3.562: 83.1639% ( 535) 00:11:37.038 3.562 - 3.590: 86.6198% ( 561) 00:11:37.038 3.590 - 3.617: 87.8334% ( 197) 00:11:37.038 3.617 - 3.645: 88.8252% ( 161) 00:11:37.038 3.645 - 3.673: 90.5255% ( 276) 00:11:37.038 3.673 - 3.701: 92.2134% ( 274) 00:11:37.038 3.701 - 3.729: 93.6118% ( 227) 00:11:37.038 3.729 - 3.757: 95.4722% ( 302) 00:11:37.038 3.757 - 3.784: 96.9691% ( 243) 00:11:37.038 3.784 - 3.812: 98.1704% ( 195) 00:11:37.038 3.812 - 3.840: 98.8973% ( 118) 00:11:37.038 3.840 - 3.868: 99.2669% ( 60) 00:11:37.038 3.868 - 3.896: 99.5257% ( 42) 00:11:37.038 3.896 - 3.923: 99.6304% ( 17) 00:11:37.038 3.923 - 3.951: 99.6489% ( 3) 00:11:37.038 3.951 - 3.979: 99.6612% ( 2) 00:11:37.038 4.007 - 4.035: 99.6673% ( 1) 00:11:37.038 4.090 - 4.118: 99.6735% ( 1) 00:11:37.038 5.315 - 5.343: 99.6797% ( 1) 00:11:37.038 5.398 - 5.426: 99.6858% ( 1) 00:11:37.038 5.426 - 5.454: 99.7043% ( 3) 00:11:37.038 5.454 - 5.482: 99.7105% ( 1) 00:11:37.038 5.482 - 5.510: 99.7228% ( 2) 00:11:37.038 5.510 - 5.537: 99.7351% ( 2) 00:11:37.038 5.565 - 5.593: 99.7413% ( 1) 00:11:37.038 5.677 - 5.704: 99.7474% ( 1) 00:11:37.038 5.816 - 5.843: 99.7536% ( 1) 00:11:37.038 6.010 - 6.038: 99.7597% ( 1) 00:11:37.038 6.261 - 6.289: 99.7659% ( 1) 00:11:37.038 6.289 - 6.317: 99.7721% ( 1) 00:11:37.038 6.400 - 6.428: 99.7782% ( 1) 00:11:37.038 6.428 - 6.456: 99.7844% ( 1) 00:11:37.038 6.456 - 6.483: 99.7906% ( 1) 00:11:37.038 6.539 - 6.567: 99.8029% ( 2) 00:11:37.038 6.567 - 6.595: 99.8090% ( 1) 00:11:37.038 6.706 - 6.734: 99.8152% ( 1) 00:11:37.038 6.734 - 6.762: 99.8214% ( 1) 00:11:37.038 6.817 - 6.845: 99.8337% ( 2) 00:11:37.038 6.873 - 6.901: 99.8398% ( 1) 00:11:37.038 6.901 - 6.929: 99.8522% ( 2) 00:11:37.038 6.929 - 6.957: 99.8645% ( 2) 00:11:37.038 7.402 - 7.457: 99.8706% ( 1) 00:11:37.038 7.513 - 7.569: 99.8768% ( 1) 00:11:37.038 7.847 - 7.903: 99.8830% ( 1) 00:11:37.038 8.070 - 8.125: 99.8891% ( 1) 00:11:37.038 8.125 - 8.181: 99.8953% ( 
1) 00:11:37.038 8.237 - 8.292: 99.9014% ( 1) 00:11:37.038 8.626 - 8.682: 99.9076% ( 1) 00:11:37.038 8.737 - 8.793: 99.9138% ( 1) 00:11:37.038 8.849 - 8.904: 99.9199% ( 1) 00:11:37.038 3989.148 - 4017.642: 100.0000% ( 13) 00:11:37.038 00:11:37.038 Complete histogram 00:11:37.038 ================== 00:11:37.038 Range in us Cumulative Count 00:11:37.038 1.781 - 1.795: 0.0123% ( 2) 00:11:37.038 1.795 - 1.809: 0.0185% ( 1) 00:11:37.038 1.809 - 1.823: 0.9302% ( 148) 00:11:37.038 1.823 - 1.837: 8.4273% ( 1217) 00:11:37.038 1.837 - 1.850: 13.9407% ( 895) 00:11:37.038 1.850 - [2024-07-15 18:36:53.386276] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:37.038 1.864: 16.2755% ( 379) 00:11:37.038 1.864 - 1.878: 41.0152% ( 4016) 00:11:37.038 1.878 - 1.892: 84.0264% ( 6982) 00:11:37.038 1.892 - 1.906: 93.4393% ( 1528) 00:11:37.038 1.906 - 1.920: 95.5399% ( 341) 00:11:37.038 1.920 - 1.934: 96.2730% ( 119) 00:11:37.038 1.934 - 1.948: 97.1047% ( 135) 00:11:37.038 1.948 - 1.962: 98.2874% ( 192) 00:11:37.038 1.962 - 1.976: 98.9959% ( 115) 00:11:37.038 1.976 - 1.990: 99.1622% ( 27) 00:11:37.038 1.990 - 2.003: 99.2053% ( 7) 00:11:37.038 2.003 - 2.017: 99.2361% ( 5) 00:11:37.038 2.017 - 2.031: 99.2423% ( 1) 00:11:37.038 2.045 - 2.059: 99.2608% ( 3) 00:11:37.038 2.059 - 2.073: 99.2731% ( 2) 00:11:37.038 2.073 - 2.087: 99.2792% ( 1) 00:11:37.038 2.087 - 2.101: 99.2854% ( 1) 00:11:37.038 2.254 - 2.268: 99.2916% ( 1) 00:11:37.038 2.282 - 2.296: 99.2977% ( 1) 00:11:37.038 2.407 - 2.421: 99.3039% ( 1) 00:11:37.038 2.463 - 2.477: 99.3100% ( 1) 00:11:37.038 3.701 - 3.729: 99.3162% ( 1) 00:11:37.038 3.729 - 3.757: 99.3224% ( 1) 00:11:37.038 3.896 - 3.923: 99.3285% ( 1) 00:11:37.038 3.923 - 3.951: 99.3347% ( 1) 00:11:37.038 3.951 - 3.979: 99.3408% ( 1) 00:11:37.038 3.979 - 4.007: 99.3470% ( 1) 00:11:37.038 4.007 - 4.035: 99.3532% ( 1) 00:11:37.038 4.063 - 4.090: 99.3593% ( 1) 00:11:37.038 4.090 - 4.118: 99.3655% ( 1) 00:11:37.038 4.369 
- 4.397: 99.3717% ( 1) 00:11:37.038 4.730 - 4.758: 99.3778% ( 1) 00:11:37.038 4.925 - 4.953: 99.3840% ( 1) 00:11:37.038 5.009 - 5.037: 99.3901% ( 1) 00:11:37.038 5.037 - 5.064: 99.3963% ( 1) 00:11:37.038 5.092 - 5.120: 99.4025% ( 1) 00:11:37.038 5.287 - 5.315: 99.4086% ( 1) 00:11:37.038 5.398 - 5.426: 99.4148% ( 1) 00:11:37.038 5.426 - 5.454: 99.4209% ( 1) 00:11:37.038 5.510 - 5.537: 99.4271% ( 1) 00:11:37.038 5.593 - 5.621: 99.4333% ( 1) 00:11:37.038 5.704 - 5.732: 99.4456% ( 2) 00:11:37.038 5.927 - 5.955: 99.4517% ( 1) 00:11:37.038 5.955 - 5.983: 99.4579% ( 1) 00:11:37.038 6.428 - 6.456: 99.4641% ( 1) 00:11:37.038 6.734 - 6.762: 99.4764% ( 2) 00:11:37.038 7.569 - 7.624: 99.4825% ( 1) 00:11:37.038 3704.209 - 3732.703: 99.4887% ( 1) 00:11:37.038 3989.148 - 4017.642: 100.0000% ( 83) 00:11:37.038 00:11:37.038 18:36:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:11:37.038 18:36:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:37.038 18:36:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:11:37.038 18:36:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:11:37.038 18:36:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:37.038 [ 00:11:37.038 { 00:11:37.038 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:37.038 "subtype": "Discovery", 00:11:37.038 "listen_addresses": [], 00:11:37.038 "allow_any_host": true, 00:11:37.038 "hosts": [] 00:11:37.038 }, 00:11:37.038 { 00:11:37.038 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:37.038 "subtype": "NVMe", 00:11:37.038 "listen_addresses": [ 00:11:37.038 { 00:11:37.038 "trtype": "VFIOUSER", 00:11:37.038 "adrfam": "IPv4", 00:11:37.038 "traddr": 
"/var/run/vfio-user/domain/vfio-user1/1", 00:11:37.038 "trsvcid": "0" 00:11:37.038 } 00:11:37.038 ], 00:11:37.038 "allow_any_host": true, 00:11:37.038 "hosts": [], 00:11:37.038 "serial_number": "SPDK1", 00:11:37.038 "model_number": "SPDK bdev Controller", 00:11:37.038 "max_namespaces": 32, 00:11:37.038 "min_cntlid": 1, 00:11:37.038 "max_cntlid": 65519, 00:11:37.038 "namespaces": [ 00:11:37.038 { 00:11:37.038 "nsid": 1, 00:11:37.038 "bdev_name": "Malloc1", 00:11:37.038 "name": "Malloc1", 00:11:37.038 "nguid": "29C5AFC0E60D4579B73F9788C2B2E82A", 00:11:37.038 "uuid": "29c5afc0-e60d-4579-b73f-9788c2b2e82a" 00:11:37.038 }, 00:11:37.038 { 00:11:37.038 "nsid": 2, 00:11:37.039 "bdev_name": "Malloc3", 00:11:37.039 "name": "Malloc3", 00:11:37.039 "nguid": "99621748FBFD4161B778CF739BC7B38C", 00:11:37.039 "uuid": "99621748-fbfd-4161-b778-cf739bc7b38c" 00:11:37.039 } 00:11:37.039 ] 00:11:37.039 }, 00:11:37.039 { 00:11:37.039 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:37.039 "subtype": "NVMe", 00:11:37.039 "listen_addresses": [ 00:11:37.039 { 00:11:37.039 "trtype": "VFIOUSER", 00:11:37.039 "adrfam": "IPv4", 00:11:37.039 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:37.039 "trsvcid": "0" 00:11:37.039 } 00:11:37.039 ], 00:11:37.039 "allow_any_host": true, 00:11:37.039 "hosts": [], 00:11:37.039 "serial_number": "SPDK2", 00:11:37.039 "model_number": "SPDK bdev Controller", 00:11:37.039 "max_namespaces": 32, 00:11:37.039 "min_cntlid": 1, 00:11:37.039 "max_cntlid": 65519, 00:11:37.039 "namespaces": [ 00:11:37.039 { 00:11:37.039 "nsid": 1, 00:11:37.039 "bdev_name": "Malloc2", 00:11:37.039 "name": "Malloc2", 00:11:37.039 "nguid": "F4B19DE0ABEB4710AB1FAEEABE3893A6", 00:11:37.039 "uuid": "f4b19de0-abeb-4710-ab1f-aeeabe3893a6" 00:11:37.039 } 00:11:37.039 ] 00:11:37.039 } 00:11:37.039 ] 00:11:37.039 18:36:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:37.039 18:36:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 
-- # aerpid=1020256 00:11:37.039 18:36:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:37.039 18:36:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:11:37.039 18:36:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:11:37.039 18:36:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:37.039 18:36:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:37.039 18:36:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:11:37.039 18:36:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:37.039 18:36:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:11:37.039 EAL: No free 2048 kB hugepages reported on node 1 00:11:37.298 [2024-07-15 18:36:53.760902] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:37.298 Malloc4 00:11:37.298 18:36:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:11:37.298 [2024-07-15 18:36:53.978525] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:37.298 18:36:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:37.557 Asynchronous Event Request test 00:11:37.557 Attaching to /var/run/vfio-user/domain/vfio-user2/2 
00:11:37.557 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:37.557 Registering asynchronous event callbacks... 00:11:37.557 Starting namespace attribute notice tests for all controllers... 00:11:37.557 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:37.557 aer_cb - Changed Namespace 00:11:37.557 Cleaning up... 00:11:37.557 [ 00:11:37.557 { 00:11:37.557 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:37.557 "subtype": "Discovery", 00:11:37.557 "listen_addresses": [], 00:11:37.557 "allow_any_host": true, 00:11:37.557 "hosts": [] 00:11:37.557 }, 00:11:37.557 { 00:11:37.557 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:37.557 "subtype": "NVMe", 00:11:37.557 "listen_addresses": [ 00:11:37.557 { 00:11:37.557 "trtype": "VFIOUSER", 00:11:37.557 "adrfam": "IPv4", 00:11:37.557 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:37.557 "trsvcid": "0" 00:11:37.557 } 00:11:37.557 ], 00:11:37.557 "allow_any_host": true, 00:11:37.557 "hosts": [], 00:11:37.557 "serial_number": "SPDK1", 00:11:37.557 "model_number": "SPDK bdev Controller", 00:11:37.557 "max_namespaces": 32, 00:11:37.557 "min_cntlid": 1, 00:11:37.557 "max_cntlid": 65519, 00:11:37.557 "namespaces": [ 00:11:37.557 { 00:11:37.557 "nsid": 1, 00:11:37.557 "bdev_name": "Malloc1", 00:11:37.557 "name": "Malloc1", 00:11:37.557 "nguid": "29C5AFC0E60D4579B73F9788C2B2E82A", 00:11:37.557 "uuid": "29c5afc0-e60d-4579-b73f-9788c2b2e82a" 00:11:37.557 }, 00:11:37.557 { 00:11:37.557 "nsid": 2, 00:11:37.557 "bdev_name": "Malloc3", 00:11:37.557 "name": "Malloc3", 00:11:37.557 "nguid": "99621748FBFD4161B778CF739BC7B38C", 00:11:37.557 "uuid": "99621748-fbfd-4161-b778-cf739bc7b38c" 00:11:37.557 } 00:11:37.557 ] 00:11:37.557 }, 00:11:37.557 { 00:11:37.557 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:37.557 "subtype": "NVMe", 00:11:37.557 "listen_addresses": [ 00:11:37.557 { 00:11:37.557 "trtype": "VFIOUSER", 00:11:37.557 "adrfam": "IPv4", 00:11:37.557 "traddr": 
"/var/run/vfio-user/domain/vfio-user2/2", 00:11:37.557 "trsvcid": "0" 00:11:37.557 } 00:11:37.557 ], 00:11:37.557 "allow_any_host": true, 00:11:37.557 "hosts": [], 00:11:37.557 "serial_number": "SPDK2", 00:11:37.557 "model_number": "SPDK bdev Controller", 00:11:37.557 "max_namespaces": 32, 00:11:37.557 "min_cntlid": 1, 00:11:37.557 "max_cntlid": 65519, 00:11:37.557 "namespaces": [ 00:11:37.557 { 00:11:37.557 "nsid": 1, 00:11:37.557 "bdev_name": "Malloc2", 00:11:37.557 "name": "Malloc2", 00:11:37.557 "nguid": "F4B19DE0ABEB4710AB1FAEEABE3893A6", 00:11:37.557 "uuid": "f4b19de0-abeb-4710-ab1f-aeeabe3893a6" 00:11:37.557 }, 00:11:37.557 { 00:11:37.557 "nsid": 2, 00:11:37.557 "bdev_name": "Malloc4", 00:11:37.557 "name": "Malloc4", 00:11:37.557 "nguid": "CAB5B4AD0EB14C85A726ECB1B89BD5A4", 00:11:37.557 "uuid": "cab5b4ad-0eb1-4c85-a726-ecb1b89bd5a4" 00:11:37.557 } 00:11:37.557 ] 00:11:37.557 } 00:11:37.557 ] 00:11:37.557 18:36:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1020256 00:11:37.557 18:36:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:11:37.557 18:36:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1012142 00:11:37.557 18:36:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1012142 ']' 00:11:37.557 18:36:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1012142 00:11:37.557 18:36:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:11:37.557 18:36:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:37.557 18:36:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1012142 00:11:37.557 18:36:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:37.557 18:36:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:37.557 18:36:54 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 1012142' 00:11:37.557 killing process with pid 1012142 00:11:37.557 18:36:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1012142 00:11:37.557 18:36:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1012142 00:11:37.816 18:36:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:11:37.816 18:36:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:37.816 18:36:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:11:37.816 18:36:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:11:37.816 18:36:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:11:37.816 18:36:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1020403 00:11:37.816 18:36:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1020403' 00:11:37.816 Process pid: 1020403 00:11:37.816 18:36:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:11:37.816 18:36:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:37.816 18:36:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1020403 00:11:37.816 18:36:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1020403 ']' 00:11:37.816 18:36:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.816 18:36:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:37.816 18:36:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.816 18:36:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:37.816 18:36:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:38.076 [2024-07-15 18:36:54.543017] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:11:38.076 [2024-07-15 18:36:54.543880] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:11:38.076 [2024-07-15 18:36:54.543917] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.076 EAL: No free 2048 kB hugepages reported on node 1 00:11:38.076 [2024-07-15 18:36:54.598548] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:38.076 [2024-07-15 18:36:54.665887] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:38.076 [2024-07-15 18:36:54.665929] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:38.076 [2024-07-15 18:36:54.665936] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.076 [2024-07-15 18:36:54.665941] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.076 [2024-07-15 18:36:54.665946] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:38.076 [2024-07-15 18:36:54.666033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.076 [2024-07-15 18:36:54.666132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:38.076 [2024-07-15 18:36:54.666216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:38.076 [2024-07-15 18:36:54.666217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.076 [2024-07-15 18:36:54.745908] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:11:38.076 [2024-07-15 18:36:54.746036] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:11:38.076 [2024-07-15 18:36:54.746286] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:11:38.076 [2024-07-15 18:36:54.746595] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:11:38.076 [2024-07-15 18:36:54.746781] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
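The setup_nvmf_vfio_user steps that follow create one socket directory per emulated controller (`seq 1 $NUM_DEVICES`, then `mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i`). A runnable sketch of just that loop, using a temporary root instead of /var/run so it works unprivileged:

```shell
# One vfio-user socket directory per controller, mirroring the
# per-device loop in nvmf_vfio_user.sh. ROOT is a stand-in for
# /var/run/vfio-user so this sketch needs no root privileges.
NUM_DEVICES=2
ROOT=$(mktemp -d)
for i in $(seq 1 "$NUM_DEVICES"); do
  mkdir -p "$ROOT/domain/vfio-user$i/$i"
done
ls "$ROOT/domain"
```

In the log, each such directory then gets a Malloc bdev, an NVMe-oF subsystem, a namespace, and a VFIOUSER listener attached via rpc.py before the example apps connect to it.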
00:11:39.014 18:36:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:39.014 18:36:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:11:39.014 18:36:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:39.952 18:36:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:11:39.952 18:36:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:39.952 18:36:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:39.952 18:36:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:39.952 18:36:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:39.952 18:36:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:40.212 Malloc1 00:11:40.212 18:36:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:40.212 18:36:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:40.471 18:36:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:40.730 18:36:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:40.730 18:36:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user2/2 00:11:40.730 18:36:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:40.989 Malloc2 00:11:40.989 18:36:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:40.989 18:36:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:41.248 18:36:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:41.508 18:36:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:11:41.508 18:36:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1020403 00:11:41.508 18:36:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1020403 ']' 00:11:41.508 18:36:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1020403 00:11:41.508 18:36:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:11:41.508 18:36:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:41.508 18:36:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1020403 00:11:41.508 18:36:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:41.508 18:36:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:41.508 18:36:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1020403' 00:11:41.508 killing 
process with pid 1020403 00:11:41.508 18:36:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1020403 00:11:41.508 18:36:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1020403 00:11:41.767 18:36:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:11:41.767 18:36:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:41.767 00:11:41.767 real 0m51.383s 00:11:41.767 user 3m23.564s 00:11:41.767 sys 0m3.635s 00:11:41.767 18:36:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:41.767 18:36:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:41.767 ************************************ 00:11:41.767 END TEST nvmf_vfio_user 00:11:41.767 ************************************ 00:11:41.767 18:36:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:41.767 18:36:58 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:11:41.767 18:36:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:41.767 18:36:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:41.767 18:36:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:41.767 ************************************ 00:11:41.767 START TEST nvmf_vfio_user_nvme_compliance 00:11:41.767 ************************************ 00:11:41.767 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:11:41.767 * Looking for test storage... 
00:11:41.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:11:41.767 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:41.767 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:11:41.767 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:41.767 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:41.767 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:41.767 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:41.767 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:41.767 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:41.767 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:41.768 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:41.768 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:41.768 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:41.768 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:41.768 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:41.768 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:41.768 18:36:58 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:41.768 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:41.768 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:41.768 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:41.768 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.768 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.768 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.768 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.768 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.768 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.768 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:11:41.768 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.768 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:11:41.768 18:36:58 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:41.768 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:41.768 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:41.768 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:41.768 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:41.768 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:41.768 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:42.027 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:42.027 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:42.027 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:42.027 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:11:42.027 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:11:42.027 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:11:42.027 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1021163 00:11:42.027 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1021163' 00:11:42.027 Process pid: 1021163 00:11:42.027 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:42.027 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:42.027 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1021163 00:11:42.027 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 1021163 ']' 00:11:42.027 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.027 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:42.027 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.027 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:42.027 18:36:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:42.027 [2024-07-15 18:36:58.528271] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:11:42.027 [2024-07-15 18:36:58.528325] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.027 EAL: No free 2048 kB hugepages reported on node 1 00:11:42.027 [2024-07-15 18:36:58.582593] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:42.027 [2024-07-15 18:36:58.661654] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.027 [2024-07-15 18:36:58.661688] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:42.027 [2024-07-15 18:36:58.661696] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:42.027 [2024-07-15 18:36:58.661702] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:42.027 [2024-07-15 18:36:58.661707] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:42.027 [2024-07-15 18:36:58.661749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.027 [2024-07-15 18:36:58.661853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.027 [2024-07-15 18:36:58.661855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.992 18:36:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:42.992 18:36:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:11:42.992 18:36:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:11:43.929 18:37:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:43.929 18:37:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:11:43.929 18:37:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:43.929 18:37:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.929 18:37:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:43.929 18:37:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.929 18:37:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:11:43.929 18:37:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # 
rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:11:43.929 18:37:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.930 18:37:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:43.930 malloc0 00:11:43.930 18:37:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.930 18:37:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:11:43.930 18:37:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.930 18:37:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:43.930 18:37:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.930 18:37:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:43.930 18:37:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.930 18:37:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:43.930 18:37:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.930 18:37:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:43.930 18:37:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.930 18:37:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:43.930 18:37:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.930 18:37:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance 
-- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:11:43.930 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.930 00:11:43.930 00:11:43.930 CUnit - A unit testing framework for C - Version 2.1-3 00:11:43.930 http://cunit.sourceforge.net/ 00:11:43.930 00:11:43.930 00:11:43.930 Suite: nvme_compliance 00:11:43.930 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 18:37:00.564658] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:43.930 [2024-07-15 18:37:00.565984] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:11:43.930 [2024-07-15 18:37:00.566001] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:11:43.930 [2024-07-15 18:37:00.566007] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:11:43.930 [2024-07-15 18:37:00.569689] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:43.930 passed 00:11:44.189 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 18:37:00.650239] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:44.189 [2024-07-15 18:37:00.653256] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:44.189 passed 00:11:44.189 Test: admin_identify_ns ...[2024-07-15 18:37:00.731763] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:44.189 [2024-07-15 18:37:00.791241] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:44.189 [2024-07-15 18:37:00.799232] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:11:44.189 [2024-07-15 18:37:00.820337] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling 
controller 00:11:44.189 passed 00:11:44.448 Test: admin_get_features_mandatory_features ...[2024-07-15 18:37:00.897370] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:44.448 [2024-07-15 18:37:00.900387] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:44.448 passed 00:11:44.448 Test: admin_get_features_optional_features ...[2024-07-15 18:37:00.978900] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:44.448 [2024-07-15 18:37:00.981915] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:44.448 passed 00:11:44.448 Test: admin_set_features_number_of_queues ...[2024-07-15 18:37:01.059725] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:44.707 [2024-07-15 18:37:01.164322] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:44.707 passed 00:11:44.707 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 18:37:01.239242] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:44.707 [2024-07-15 18:37:01.242265] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:44.707 passed 00:11:44.707 Test: admin_get_log_page_with_lpo ...[2024-07-15 18:37:01.320086] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:44.707 [2024-07-15 18:37:01.387232] ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:11:44.707 [2024-07-15 18:37:01.400297] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:44.965 passed 00:11:44.965 Test: fabric_property_get ...[2024-07-15 18:37:01.477062] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:44.965 [2024-07-15 18:37:01.478293] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 
0x7f failed 00:11:44.965 [2024-07-15 18:37:01.480082] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:44.965 passed 00:11:44.965 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 18:37:01.557570] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:44.965 [2024-07-15 18:37:01.558795] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:11:44.965 [2024-07-15 18:37:01.560590] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:44.965 passed 00:11:44.965 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 18:37:01.638378] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:45.224 [2024-07-15 18:37:01.724242] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:45.224 [2024-07-15 18:37:01.740235] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:45.224 [2024-07-15 18:37:01.745324] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:45.224 passed 00:11:45.224 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 18:37:01.819452] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:45.225 [2024-07-15 18:37:01.820678] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:11:45.225 [2024-07-15 18:37:01.822472] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:45.225 passed 00:11:45.225 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 18:37:01.900256] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:45.484 [2024-07-15 18:37:01.978242] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:45.484 [2024-07-15 18:37:02.002234] 
vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:45.484 [2024-07-15 18:37:02.007319] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:45.484 passed 00:11:45.484 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 18:37:02.084257] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:45.484 [2024-07-15 18:37:02.085487] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:11:45.484 [2024-07-15 18:37:02.085512] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:11:45.484 [2024-07-15 18:37:02.087276] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:45.484 passed 00:11:45.484 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 18:37:02.165135] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:45.742 [2024-07-15 18:37:02.254232] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:11:45.742 [2024-07-15 18:37:02.262231] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:11:45.742 [2024-07-15 18:37:02.270229] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:11:45.742 [2024-07-15 18:37:02.278239] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:11:45.742 [2024-07-15 18:37:02.307315] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:45.742 passed 00:11:45.742 Test: admin_create_io_sq_verify_pc ...[2024-07-15 18:37:02.387279] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:45.742 [2024-07-15 18:37:02.403242] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:11:45.742 [2024-07-15 18:37:02.420483] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:11:45.742 passed 00:11:46.001 Test: admin_create_io_qp_max_qps ...[2024-07-15 18:37:02.499033] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:46.938 [2024-07-15 18:37:03.595234] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:11:47.505 [2024-07-15 18:37:04.007470] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:47.505 passed 00:11:47.505 Test: admin_create_io_sq_shared_cq ...[2024-07-15 18:37:04.084422] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:47.764 [2024-07-15 18:37:04.217241] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:47.764 [2024-07-15 18:37:04.254281] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:47.764 passed 00:11:47.764 00:11:47.764 Run Summary: Type Total Ran Passed Failed Inactive 00:11:47.764 suites 1 1 n/a 0 0 00:11:47.764 tests 18 18 18 0 0 00:11:47.764 asserts 360 360 360 0 n/a 00:11:47.764 00:11:47.764 Elapsed time = 1.520 seconds 00:11:47.764 18:37:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1021163 00:11:47.764 18:37:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 1021163 ']' 00:11:47.764 18:37:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 1021163 00:11:47.764 18:37:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:11:47.764 18:37:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:47.764 18:37:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1021163 00:11:47.765 18:37:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:47.765 18:37:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:47.765 18:37:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1021163' 00:11:47.765 killing process with pid 1021163 00:11:47.765 18:37:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 1021163 00:11:47.765 18:37:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 1021163 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:11:48.024 00:11:48.024 real 0m6.183s 00:11:48.024 user 0m17.673s 00:11:48.024 sys 0m0.474s 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:48.024 ************************************ 00:11:48.024 END TEST nvmf_vfio_user_nvme_compliance 00:11:48.024 ************************************ 00:11:48.024 18:37:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:48.024 18:37:04 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:48.024 18:37:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:48.024 18:37:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.024 18:37:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:48.024 ************************************ 00:11:48.024 START TEST nvmf_vfio_user_fuzz 00:11:48.024 ************************************ 00:11:48.024 18:37:04 
nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:48.024 * Looking for test storage... 00:11:48.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.024 
18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1022246 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1022246' 00:11:48.024 Process pid: 1022246 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1022246 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1022246 ']' 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:48.024 18:37:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:48.961 18:37:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:48.961 18:37:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:11:48.961 18:37:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:11:49.899 18:37:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:49.899 18:37:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.899 18:37:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:49.899 18:37:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.899 18:37:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:11:49.899 18:37:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:11:49.899 18:37:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.899 18:37:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:50.158 malloc0 00:11:50.158 18:37:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.158 18:37:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:11:50.158 18:37:06 nvmf_tcp.nvmf_vfio_user_fuzz 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.158 18:37:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:50.158 18:37:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.158 18:37:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:50.158 18:37:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.158 18:37:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:50.158 18:37:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.158 18:37:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:50.158 18:37:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.158 18:37:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:50.158 18:37:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.158 18:37:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:11:50.158 18:37:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:12:22.232 Fuzzing completed. 
Shutting down the fuzz application 00:12:22.232 00:12:22.232 Dumping successful admin opcodes: 00:12:22.232 8, 9, 10, 24, 00:12:22.232 Dumping successful io opcodes: 00:12:22.232 0, 00:12:22.232 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1144983, total successful commands: 4509, random_seed: 298537472 00:12:22.232 NS: 0x200003a1ef00 admin qp, Total commands completed: 282975, total successful commands: 2277, random_seed: 3659414656 00:12:22.232 18:37:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:12:22.232 18:37:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.232 18:37:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:22.232 18:37:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.232 18:37:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1022246 00:12:22.232 18:37:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1022246 ']' 00:12:22.232 18:37:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 1022246 00:12:22.232 18:37:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:12:22.232 18:37:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:22.232 18:37:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1022246 00:12:22.232 18:37:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:22.232 18:37:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:22.232 18:37:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1022246' 00:12:22.232 killing process with pid 1022246 00:12:22.232 18:37:37 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@967 -- # kill 1022246 00:12:22.232 18:37:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 1022246 00:12:22.232 18:37:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:12:22.232 18:37:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:12:22.232 00:12:22.232 real 0m32.778s 00:12:22.232 user 0m34.858s 00:12:22.232 sys 0m26.757s 00:12:22.232 18:37:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:22.232 18:37:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:22.232 ************************************ 00:12:22.232 END TEST nvmf_vfio_user_fuzz 00:12:22.232 ************************************ 00:12:22.232 18:37:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:22.232 18:37:37 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:22.232 18:37:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:22.232 18:37:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:22.232 18:37:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:22.232 ************************************ 00:12:22.232 START TEST nvmf_host_management 00:12:22.232 ************************************ 00:12:22.232 18:37:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:22.232 * Looking for test storage... 
00:12:22.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:22.232 18:37:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:22.232 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:22.232 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:22.232 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:22.232 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:22.232 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:22.232 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:22.232 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:22.232 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:22.232 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:22.232 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:22.232 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:22.232 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:22.232 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:22.232 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:22.232 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:22.232 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:22.232 
18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:22.232 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:22.232 18:37:37 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.232 18:37:37 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.232 18:37:37 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.233 18:37:37 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.233 18:37:37 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.233 18:37:37 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.233 18:37:37 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:22.233 18:37:37 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.233 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:12:22.233 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:22.233 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:22.233 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:22.233 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:22.233 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:22.233 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:22.233 18:37:37 nvmf_tcp.nvmf_host_management 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:22.233 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:22.233 18:37:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:22.233 18:37:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:22.233 18:37:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:22.233 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:22.233 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:22.233 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:22.233 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:22.233 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:22.233 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.233 18:37:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:22.233 18:37:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.233 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:22.233 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:22.233 18:37:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:12:22.233 18:37:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:26.476 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.476 
18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:26.476 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.476 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:26.476 Found net devices under 0000:86:00.0: cvl_0_0 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:26.477 Found net devices under 0000:86:00.1: cvl_0_1 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:26.477 18:37:42 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:26.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:26.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:12:26.477 00:12:26.477 --- 10.0.0.2 ping statistics --- 00:12:26.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.477 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:26.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:26.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:12:26.477 00:12:26.477 --- 10.0.0.1 ping statistics --- 00:12:26.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.477 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:26.477 18:37:42 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1030661 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1030661 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1030661 ']' 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:26.477 18:37:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.477 [2024-07-15 18:37:42.613916] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:12:26.477 [2024-07-15 18:37:42.613957] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.477 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.477 [2024-07-15 18:37:42.671450] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:26.477 [2024-07-15 18:37:42.751005] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.477 [2024-07-15 18:37:42.751044] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.477 [2024-07-15 18:37:42.751051] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.477 [2024-07-15 18:37:42.751057] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.477 [2024-07-15 18:37:42.751063] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:26.477 [2024-07-15 18:37:42.751160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.477 [2024-07-15 18:37:42.751248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.477 [2024-07-15 18:37:42.751354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.477 [2024-07-15 18:37:42.751355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:26.737 18:37:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:26.737 18:37:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:26.737 18:37:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:26.737 18:37:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:26.737 18:37:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.995 [2024-07-15 18:37:43.455008] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.995 18:37:43 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.995 Malloc0 00:12:26.995 [2024-07-15 18:37:43.515022] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1030738 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1030738 /var/tmp/bdevperf.sock 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1030738 ']' 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management 
-- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:26.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:26.995 { 00:12:26.995 "params": { 00:12:26.995 "name": "Nvme$subsystem", 00:12:26.995 "trtype": "$TEST_TRANSPORT", 00:12:26.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:26.995 "adrfam": "ipv4", 00:12:26.995 "trsvcid": "$NVMF_PORT", 00:12:26.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:26.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:26.995 "hdgst": ${hdgst:-false}, 00:12:26.995 "ddgst": ${ddgst:-false} 00:12:26.995 }, 00:12:26.995 "method": "bdev_nvme_attach_controller" 00:12:26.995 } 00:12:26.995 EOF 00:12:26.995 )") 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:26.995 18:37:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:26.995 "params": { 00:12:26.995 "name": "Nvme0", 00:12:26.995 "trtype": "tcp", 00:12:26.995 "traddr": "10.0.0.2", 00:12:26.995 "adrfam": "ipv4", 00:12:26.995 "trsvcid": "4420", 00:12:26.995 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:26.995 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:26.995 "hdgst": false, 00:12:26.995 "ddgst": false 00:12:26.995 }, 00:12:26.995 "method": "bdev_nvme_attach_controller" 00:12:26.995 }' 00:12:26.995 [2024-07-15 18:37:43.609268] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:12:26.995 [2024-07-15 18:37:43.609317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1030738 ] 00:12:26.995 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.995 [2024-07-15 18:37:43.664141] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.253 [2024-07-15 18:37:43.737231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.253 Running I/O for 10 seconds... 
00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.823 
18:37:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899
00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']'
00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break
00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:12:27.823 [2024-07-15 18:37:44.514353] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e3460 is same with the state(5) to be set
00:12:27.823 [11 further identical recv-state messages for tqpair=0x13e3460, 18:37:44.514422 through 18:37:44.514487, omitted]
00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:27.823 18:37:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:12:27.823 [2024-07-15 18:37:44.527486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:12:27.823 [2024-07-15 18:37:44.527519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:27.823 [62 further identical WRITE / "ABORTED - SQ DELETION" pairs omitted: cid:1 through cid:62, lba:128 through lba:7936 in steps of 128, len:128 each]
00:12:27.825 [2024-07-15 18:37:44.528515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:12:27.826 [2024-07-15 18:37:44.528523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:27.826 [2024-07-15 18:37:44.528547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:12:28.085 [2024-07-15 18:37:44.528597] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f71b20 was disconnected and freed. reset controller.
00:12:28.085 [2024-07-15 18:37:44.528650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:12:28.085 [2024-07-15 18:37:44.528662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:28.085 [2024-07-15 18:37:44.528671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:12:28.085 [2024-07-15 18:37:44.528679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:28.085 [2024-07-15 18:37:44.528686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:12:28.085 [2024-07-15 18:37:44.528693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:28.085 [2024-07-15 18:37:44.528701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:12:28.085 [2024-07-15 18:37:44.528708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:28.085 [2024-07-15 18:37:44.528715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b60980 is same with the state(5) to be set
00:12:28.085 [2024-07-15 18:37:44.529598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:12:28.085 task offset: 0 on job bdev=Nvme0n1 fails
00:12:28.085
00:12:28.085 Latency(us)
00:12:28.085 Device Information : runtime(s)   IOPS     MiB/s    Fail/s   TO/s    Average    min       max
00:12:28.085 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:12:28.085 Job: Nvme0n1 ended in about 0.59 seconds with error
00:12:28.085 Verification LBA range: start 0x0 length 0x400
00:12:28.085 Nvme0n1 : 0.59                     1738.19  108.64   108.64   0.00    33949.63   1545.79   30317.52
00:12:28.085 ===================================================================================================================
00:12:28.085 Total :                            1738.19  108.64   108.64   0.00    33949.63   1545.79   30317.52
00:12:28.085 [2024-07-15 18:37:44.531167] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:12:28.085 [2024-07-15 18:37:44.531180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b60980 (9): Bad file descriptor
00:12:28.085 [2024-07-15 18:37:44.540350] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
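As a sanity check on the bdevperf table above (a hypothetical re-computation, not part of the test itself): with `-o 65536` (64 KiB IOs), the MiB/s column follows directly from IOPS as IOPS * 65536 / 2^20, i.e. IOPS / 16.

```shell
# Recompute MiB/s for the Nvme0n1 row of the failed 0.59 s run:
# 1738.19 IOPS at 65536-byte IOs.
awk 'BEGIN { printf "%.2f\n", 1738.19 * 65536 / (1024 * 1024) }'
# prints 108.64, matching the MiB/s column in the table above
```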
00:12:29.022 18:37:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1030738
00:12:29.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1030738) - No such process
00:12:29.022 18:37:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true
00:12:29.022 18:37:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:12:29.022 18:37:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:12:29.022 18:37:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:12:29.022 18:37:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:12:29.022 18:37:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:12:29.022 18:37:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:12:29.022 18:37:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:12:29.022 {
00:12:29.022   "params": {
00:12:29.022     "name": "Nvme$subsystem",
00:12:29.022     "trtype": "$TEST_TRANSPORT",
00:12:29.022     "traddr": "$NVMF_FIRST_TARGET_IP",
00:12:29.022     "adrfam": "ipv4",
00:12:29.022     "trsvcid": "$NVMF_PORT",
00:12:29.022     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:12:29.022     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:12:29.022     "hdgst": ${hdgst:-false},
00:12:29.022     "ddgst": ${ddgst:-false}
00:12:29.022   },
00:12:29.022   "method": "bdev_nvme_attach_controller"
00:12:29.022 }
00:12:29.022 EOF
00:12:29.022 )")
00:12:29.022 18:37:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:12:29.022 18:37:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
00:12:29.022 18:37:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:12:29.022 18:37:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:12:29.022   "params": {
00:12:29.022     "name": "Nvme0",
00:12:29.022     "trtype": "tcp",
00:12:29.022     "traddr": "10.0.0.2",
00:12:29.022     "adrfam": "ipv4",
00:12:29.022     "trsvcid": "4420",
00:12:29.022     "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:12:29.022     "hostnqn": "nqn.2016-06.io.spdk:host0",
00:12:29.022     "hdgst": false,
00:12:29.022     "ddgst": false
00:12:29.022   },
00:12:29.022   "method": "bdev_nvme_attach_controller"
00:12:29.022 }'
00:12:29.022 [2024-07-15 18:37:45.582483] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization...
00:12:29.022 [2024-07-15 18:37:45.582531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1031184 ]
00:12:29.022 EAL: No free 2048 kB hugepages reported on node 1
00:12:29.022 [2024-07-15 18:37:45.636006] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:29.022 [2024-07-15 18:37:45.706652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:29.282 Running I/O for 1 seconds...
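The trace above shows the config-passing pattern used by `host_management.sh`: `gen_nvmf_target_json` builds a JSON config string, and bdevperf reads it via `--json /dev/fd/62`. A minimal sketch of that idiom, assuming bash (process substitution exposes generated text as a `/dev/fd` path; plain `cat` stands in here for bdevperf):

```shell
# Hypothetical reduced example of feeding an in-memory JSON config to a tool
# that only accepts a file path, without writing a temp file.
config='{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2"
  },
  "method": "bdev_nvme_attach_controller"
}'
# <(...) yields a /dev/fd/NN path; the consumer reads the JSON from it,
# exactly as bdevperf does with --json /dev/fd/62 in the run above.
cat <(printf '%s\n' "$config")
```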
00:12:30.217
00:12:30.217 Latency(us)
00:12:30.217 Device Information : runtime(s)   IOPS     MiB/s    Fail/s   TO/s    Average    min       max
00:12:30.217 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:12:30.217 Verification LBA range: start 0x0 length 0x400
00:12:30.217 Nvme0n1 : 1.01                     2044.35  127.77   0.00     0.00    30757.06   2521.71   29861.62
00:12:30.217 ===================================================================================================================
00:12:30.217 Total :                            2044.35  127.77   0.00     0.00    30757.06   2521.71   29861.62
00:12:30.476 18:37:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:12:30.476 18:37:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:12:30.476 18:37:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:12:30.476 18:37:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:12:30.476 18:37:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:12:30.476 18:37:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:30.476 18:37:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:12:30.476 18:37:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:30.476 18:37:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:12:30.476 18:37:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:30.476 18:37:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:12:30.476 rmmod nvme_tcp
00:12:30.476 rmmod nvme_fabrics
00:12:30.476 rmmod nvme_keyring
00:12:30.476 18:37:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:30.476 18:37:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:12:30.476 18:37:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:12:30.476 18:37:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1030661 ']'
00:12:30.476 18:37:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1030661
00:12:30.476 18:37:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1030661 ']'
00:12:30.476 18:37:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1030661
00:12:30.476 18:37:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname
00:12:30.476 18:37:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:12:30.476 18:37:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1030661
00:12:30.733 18:37:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:12:30.733 18:37:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:12:30.733 18:37:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1030661'
00:12:30.733 killing process with pid 1030661
00:12:30.733 18:37:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1030661
00:12:30.733 18:37:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1030661
00:12:30.733 [2024-07-15 18:37:47.366428] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:12:30.733 18:37:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:12:30.733 18:37:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:12:30.733 18:37:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:12:30.733 18:37:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:30.733 18:37:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns
00:12:30.733 18:37:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:30.733 18:37:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:30.733 18:37:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:33.295 18:37:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:12:33.295 18:37:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:12:33.295
00:12:33.295 real 0m12.006s
00:12:33.295 user 0m22.222s
00:12:33.295 sys 0m4.923s
00:12:33.295 18:37:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:33.295 18:37:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:12:33.295 ************************************
00:12:33.295 END TEST nvmf_host_management
00:12:33.295 ************************************
00:12:33.295 18:37:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:12:33.295 18:37:49 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:12:33.295 18:37:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:12:33.295 18:37:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:33.295 18:37:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:12:33.295 ************************************
00:12:33.295 START TEST nvmf_lvol
00:12:33.295 ************************************
00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:12:33.295 *
Looking for test storage... 00:12:33.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:33.295 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:33.296 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.296 18:37:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:33.296 18:37:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.296 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:33.296 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:33.296 18:37:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:12:33.296 18:37:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:38.559 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:38.559 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:38.559 Found net devices under 0000:86:00.0: cvl_0_0 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:38.559 Found net devices under 0000:86:00.1: cvl_0_1 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:38.559 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:38.560 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:38.560 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:38.560 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:38.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:38.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:12:38.560 00:12:38.560 --- 10.0.0.2 ping statistics --- 00:12:38.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.560 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:12:38.560 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:38.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:38.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:12:38.560 00:12:38.560 --- 10.0.0.1 ping statistics --- 00:12:38.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.560 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:12:38.560 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.560 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:12:38.560 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:38.560 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.560 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:38.560 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:38.560 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.560 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:38.560 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:38.560 18:37:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:38.560 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:38.560 18:37:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:38.560 18:37:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:38.560 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1034821 00:12:38.560 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1034821 00:12:38.560 18:37:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:38.560 18:37:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1034821 ']' 00:12:38.560 18:37:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 
-- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.560 18:37:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:38.560 18:37:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.560 18:37:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:38.560 18:37:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:38.560 [2024-07-15 18:37:54.944033] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:12:38.560 [2024-07-15 18:37:54.944080] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.560 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.560 [2024-07-15 18:37:55.001479] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:38.560 [2024-07-15 18:37:55.082668] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.560 [2024-07-15 18:37:55.082700] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.560 [2024-07-15 18:37:55.082707] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.560 [2024-07-15 18:37:55.082713] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.560 [2024-07-15 18:37:55.082719] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:38.560 [2024-07-15 18:37:55.082791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.560 [2024-07-15 18:37:55.082866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.560 [2024-07-15 18:37:55.082868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.127 18:37:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:39.127 18:37:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:12:39.127 18:37:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:39.127 18:37:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:39.127 18:37:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:39.127 18:37:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.127 18:37:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:39.386 [2024-07-15 18:37:55.947660] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.386 18:37:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:39.645 18:37:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:39.645 18:37:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:39.904 18:37:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:39.904 18:37:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:39.904 18:37:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:40.163 18:37:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=168d04c1-0aeb-4794-a0e1-7ca5b966d3a6 00:12:40.163 18:37:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 168d04c1-0aeb-4794-a0e1-7ca5b966d3a6 lvol 20 00:12:40.422 18:37:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e7128f05-402b-447c-ab5f-7b3b44dc1725 00:12:40.422 18:37:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:40.422 18:37:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e7128f05-402b-447c-ab5f-7b3b44dc1725 00:12:40.681 18:37:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:40.940 [2024-07-15 18:37:57.436069] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.940 18:37:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:41.199 18:37:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1035306 00:12:41.199 18:37:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:41.199 18:37:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:41.199 EAL: No free 2048 kB hugepages reported on node 1 
00:12:42.135 18:37:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e7128f05-402b-447c-ab5f-7b3b44dc1725 MY_SNAPSHOT 00:12:42.394 18:37:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ea337ef6-d357-47a4-a20a-22f7899888b2 00:12:42.394 18:37:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e7128f05-402b-447c-ab5f-7b3b44dc1725 30 00:12:42.653 18:37:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ea337ef6-d357-47a4-a20a-22f7899888b2 MY_CLONE 00:12:42.911 18:37:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=293a2576-1fc0-4c8e-8114-fe96b7a0609f 00:12:42.911 18:37:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 293a2576-1fc0-4c8e-8114-fe96b7a0609f 00:12:43.169 18:37:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1035306 00:12:53.174 Initializing NVMe Controllers 00:12:53.174 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:53.174 Controller IO queue size 128, less than required. 00:12:53.174 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:53.174 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:53.174 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:53.174 Initialization complete. Launching workers. 
00:12:53.174 ======================================================== 00:12:53.174 Latency(us) 00:12:53.174 Device Information : IOPS MiB/s Average min max 00:12:53.174 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11992.70 46.85 10677.00 1785.81 61005.34 00:12:53.174 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11930.90 46.61 10733.51 3681.49 53211.60 00:12:53.174 ======================================================== 00:12:53.174 Total : 23923.60 93.45 10705.18 1785.81 61005.34 00:12:53.174 00:12:53.174 18:38:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:53.174 18:38:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e7128f05-402b-447c-ab5f-7b3b44dc1725 00:12:53.174 18:38:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 168d04c1-0aeb-4794-a0e1-7ca5b966d3a6 00:12:53.174 18:38:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:53.174 18:38:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:53.174 18:38:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:53.174 18:38:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:53.174 18:38:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:12:53.174 18:38:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:53.174 18:38:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:12:53.174 18:38:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:53.174 18:38:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:53.174 rmmod nvme_tcp 00:12:53.174 rmmod nvme_fabrics 00:12:53.174 rmmod nvme_keyring 00:12:53.174 
18:38:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:53.174 18:38:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:12:53.174 18:38:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:12:53.174 18:38:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1034821 ']' 00:12:53.174 18:38:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1034821 00:12:53.174 18:38:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1034821 ']' 00:12:53.174 18:38:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1034821 00:12:53.174 18:38:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:12:53.174 18:38:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:53.174 18:38:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1034821 00:12:53.174 18:38:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:53.174 18:38:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:53.174 18:38:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1034821' 00:12:53.174 killing process with pid 1034821 00:12:53.174 18:38:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1034821 00:12:53.174 18:38:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1034821 00:12:53.174 18:38:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:53.174 18:38:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:53.174 18:38:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:53.174 18:38:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:53.174 18:38:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:53.174 18:38:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.174 18:38:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:53.174 18:38:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.552 18:38:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:54.552 00:12:54.552 real 0m21.634s 00:12:54.552 user 1m4.481s 00:12:54.552 sys 0m6.842s 00:12:54.552 18:38:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:54.552 18:38:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:54.552 ************************************ 00:12:54.552 END TEST nvmf_lvol 00:12:54.552 ************************************ 00:12:54.552 18:38:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:54.552 18:38:11 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:54.552 18:38:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:54.552 18:38:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:54.552 18:38:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:54.552 ************************************ 00:12:54.552 START TEST nvmf_lvs_grow 00:12:54.552 ************************************ 00:12:54.552 18:38:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:54.811 * Looking for test storage... 
00:12:54.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.811 18:38:11 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:54.811 18:38:11 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:12:54.811 18:38:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:00.082 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:00.082 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:13:00.082 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:00.082 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:00.082 18:38:16 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:00.082 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:00.082 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:00.082 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:13:00.082 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:00.082 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:13:00.082 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:13:00.082 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:13:00.082 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:13:00.082 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:13:00.082 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:00.083 18:38:16 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:00.083 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:00.083 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:00.083 18:38:16 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:00.083 Found net devices under 0000:86:00.0: cvl_0_0 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:00.083 Found net devices under 0000:86:00.1: cvl_0_1 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:00.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:00.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:13:00.083 00:13:00.083 --- 10.0.0.2 ping statistics --- 00:13:00.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.083 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:00.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:00.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:13:00.083 00:13:00.083 --- 10.0.0.1 ping statistics --- 00:13:00.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.083 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1040574 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1040574 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1040574 ']' 
00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:00.083 18:38:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:00.083 [2024-07-15 18:38:16.709505] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:13:00.083 [2024-07-15 18:38:16.709545] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.083 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.083 [2024-07-15 18:38:16.766297] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.342 [2024-07-15 18:38:16.844079] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:00.342 [2024-07-15 18:38:16.844116] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:00.342 [2024-07-15 18:38:16.844124] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:00.342 [2024-07-15 18:38:16.844129] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:00.342 [2024-07-15 18:38:16.844134] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:00.342 [2024-07-15 18:38:16.844167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.907 18:38:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:00.908 18:38:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:13:00.908 18:38:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:00.908 18:38:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:00.908 18:38:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:00.908 18:38:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.908 18:38:17 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:01.165 [2024-07-15 18:38:17.695730] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:01.165 18:38:17 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:01.165 18:38:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:01.165 18:38:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:01.165 18:38:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:01.165 ************************************ 00:13:01.165 START TEST lvs_grow_clean 00:13:01.165 ************************************ 00:13:01.165 18:38:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:13:01.165 18:38:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:01.165 18:38:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:01.165 18:38:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:01.165 18:38:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:01.165 18:38:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:01.165 18:38:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:01.165 18:38:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:01.165 18:38:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:01.165 18:38:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:01.423 18:38:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:01.423 18:38:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:01.423 18:38:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6a9690a7-504b-489b-9f19-946d4c375ffe 00:13:01.423 18:38:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a9690a7-504b-489b-9f19-946d4c375ffe 00:13:01.423 18:38:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:01.680 18:38:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:01.680 18:38:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:01.680 18:38:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6a9690a7-504b-489b-9f19-946d4c375ffe lvol 150 00:13:01.937 18:38:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d4f7f767-7165-4a9e-91e0-912544b15ce6 00:13:01.937 18:38:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:01.937 18:38:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:01.937 [2024-07-15 18:38:18.620916] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:01.937 [2024-07-15 18:38:18.620971] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:01.937 true 00:13:01.937 18:38:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a9690a7-504b-489b-9f19-946d4c375ffe 00:13:01.937 18:38:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:02.194 18:38:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:02.194 18:38:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:13:02.452 18:38:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d4f7f767-7165-4a9e-91e0-912544b15ce6 00:13:02.710 18:38:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:02.710 [2024-07-15 18:38:19.319013] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.710 18:38:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:02.968 18:38:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:02.968 18:38:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1041068 00:13:02.968 18:38:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:02.968 18:38:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1041068 /var/tmp/bdevperf.sock 00:13:02.968 18:38:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1041068 ']' 00:13:02.968 18:38:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:02.968 18:38:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:02.968 18:38:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:02.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:02.968 18:38:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:02.968 18:38:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:02.968 [2024-07-15 18:38:19.520919] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:13:02.968 [2024-07-15 18:38:19.520963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041068 ] 00:13:02.968 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.968 [2024-07-15 18:38:19.573236] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.968 [2024-07-15 18:38:19.645656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.226 18:38:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:03.226 18:38:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:13:03.226 18:38:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:03.483 Nvme0n1 00:13:03.483 18:38:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:03.483 [ 00:13:03.484 { 00:13:03.484 "name": "Nvme0n1", 00:13:03.484 "aliases": [ 00:13:03.484 "d4f7f767-7165-4a9e-91e0-912544b15ce6" 
00:13:03.484 ], 00:13:03.484 "product_name": "NVMe disk", 00:13:03.484 "block_size": 4096, 00:13:03.484 "num_blocks": 38912, 00:13:03.484 "uuid": "d4f7f767-7165-4a9e-91e0-912544b15ce6", 00:13:03.484 "assigned_rate_limits": { 00:13:03.484 "rw_ios_per_sec": 0, 00:13:03.484 "rw_mbytes_per_sec": 0, 00:13:03.484 "r_mbytes_per_sec": 0, 00:13:03.484 "w_mbytes_per_sec": 0 00:13:03.484 }, 00:13:03.484 "claimed": false, 00:13:03.484 "zoned": false, 00:13:03.484 "supported_io_types": { 00:13:03.484 "read": true, 00:13:03.484 "write": true, 00:13:03.484 "unmap": true, 00:13:03.484 "flush": true, 00:13:03.484 "reset": true, 00:13:03.484 "nvme_admin": true, 00:13:03.484 "nvme_io": true, 00:13:03.484 "nvme_io_md": false, 00:13:03.484 "write_zeroes": true, 00:13:03.484 "zcopy": false, 00:13:03.484 "get_zone_info": false, 00:13:03.484 "zone_management": false, 00:13:03.484 "zone_append": false, 00:13:03.484 "compare": true, 00:13:03.484 "compare_and_write": true, 00:13:03.484 "abort": true, 00:13:03.484 "seek_hole": false, 00:13:03.484 "seek_data": false, 00:13:03.484 "copy": true, 00:13:03.484 "nvme_iov_md": false 00:13:03.484 }, 00:13:03.484 "memory_domains": [ 00:13:03.484 { 00:13:03.484 "dma_device_id": "system", 00:13:03.484 "dma_device_type": 1 00:13:03.484 } 00:13:03.484 ], 00:13:03.484 "driver_specific": { 00:13:03.484 "nvme": [ 00:13:03.484 { 00:13:03.484 "trid": { 00:13:03.484 "trtype": "TCP", 00:13:03.484 "adrfam": "IPv4", 00:13:03.484 "traddr": "10.0.0.2", 00:13:03.484 "trsvcid": "4420", 00:13:03.484 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:03.484 }, 00:13:03.484 "ctrlr_data": { 00:13:03.484 "cntlid": 1, 00:13:03.484 "vendor_id": "0x8086", 00:13:03.484 "model_number": "SPDK bdev Controller", 00:13:03.484 "serial_number": "SPDK0", 00:13:03.484 "firmware_revision": "24.09", 00:13:03.484 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:03.484 "oacs": { 00:13:03.484 "security": 0, 00:13:03.484 "format": 0, 00:13:03.484 "firmware": 0, 00:13:03.484 "ns_manage": 0 
00:13:03.484 }, 00:13:03.484 "multi_ctrlr": true, 00:13:03.484 "ana_reporting": false 00:13:03.484 }, 00:13:03.484 "vs": { 00:13:03.484 "nvme_version": "1.3" 00:13:03.484 }, 00:13:03.484 "ns_data": { 00:13:03.484 "id": 1, 00:13:03.484 "can_share": true 00:13:03.484 } 00:13:03.484 } 00:13:03.484 ], 00:13:03.484 "mp_policy": "active_passive" 00:13:03.484 } 00:13:03.484 } 00:13:03.484 ] 00:13:03.741 18:38:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1041298 00:13:03.741 18:38:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:03.741 18:38:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:03.741 Running I/O for 10 seconds... 00:13:04.677 Latency(us) 00:13:04.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:04.677 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:04.677 Nvme0n1 : 1.00 23048.00 90.03 0.00 0.00 0.00 0.00 0.00 00:13:04.677 =================================================================================================================== 00:13:04.677 Total : 23048.00 90.03 0.00 0.00 0.00 0.00 0.00 00:13:04.677 00:13:05.614 18:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6a9690a7-504b-489b-9f19-946d4c375ffe 00:13:05.614 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:05.614 Nvme0n1 : 2.00 23225.00 90.72 0.00 0.00 0.00 0.00 0.00 00:13:05.614 =================================================================================================================== 00:13:05.614 Total : 23225.00 90.72 0.00 0.00 0.00 0.00 0.00 00:13:05.614 00:13:05.873 true 00:13:05.873 18:38:22 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a9690a7-504b-489b-9f19-946d4c375ffe 00:13:05.873 18:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:05.873 18:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:05.873 18:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:05.873 18:38:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1041298 00:13:06.808 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:06.808 Nvme0n1 : 3.00 23282.00 90.95 0.00 0.00 0.00 0.00 0.00 00:13:06.808 =================================================================================================================== 00:13:06.808 Total : 23282.00 90.95 0.00 0.00 0.00 0.00 0.00 00:13:06.808 00:13:07.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:07.745 Nvme0n1 : 4.00 23343.50 91.19 0.00 0.00 0.00 0.00 0.00 00:13:07.745 =================================================================================================================== 00:13:07.745 Total : 23343.50 91.19 0.00 0.00 0.00 0.00 0.00 00:13:07.745 00:13:08.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:08.682 Nvme0n1 : 5.00 23398.40 91.40 0.00 0.00 0.00 0.00 0.00 00:13:08.682 =================================================================================================================== 00:13:08.682 Total : 23398.40 91.40 0.00 0.00 0.00 0.00 0.00 00:13:08.682 00:13:09.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:09.617 Nvme0n1 : 6.00 23418.33 91.48 0.00 0.00 0.00 0.00 0.00 00:13:09.617 
=================================================================================================================== 00:13:09.617 Total : 23418.33 91.48 0.00 0.00 0.00 0.00 0.00 00:13:09.617 00:13:10.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:10.591 Nvme0n1 : 7.00 23447.57 91.59 0.00 0.00 0.00 0.00 0.00 00:13:10.591 =================================================================================================================== 00:13:10.591 Total : 23447.57 91.59 0.00 0.00 0.00 0.00 0.00 00:13:10.591 00:13:11.963 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:11.963 Nvme0n1 : 8.00 23465.88 91.66 0.00 0.00 0.00 0.00 0.00 00:13:11.963 =================================================================================================================== 00:13:11.963 Total : 23465.88 91.66 0.00 0.00 0.00 0.00 0.00 00:13:11.963 00:13:12.949 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:12.949 Nvme0n1 : 9.00 23475.44 91.70 0.00 0.00 0.00 0.00 0.00 00:13:12.949 =================================================================================================================== 00:13:12.949 Total : 23475.44 91.70 0.00 0.00 0.00 0.00 0.00 00:13:12.949 00:13:13.883 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:13.883 Nvme0n1 : 10.00 23470.30 91.68 0.00 0.00 0.00 0.00 0.00 00:13:13.883 =================================================================================================================== 00:13:13.883 Total : 23470.30 91.68 0.00 0.00 0.00 0.00 0.00 00:13:13.883 00:13:13.883 00:13:13.883 Latency(us) 00:13:13.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:13.883 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:13.883 Nvme0n1 : 10.00 23473.49 91.69 0.00 0.00 5449.74 1581.41 9972.87 00:13:13.883 
=================================================================================================================== 00:13:13.883 Total : 23473.49 91.69 0.00 0.00 5449.74 1581.41 9972.87 00:13:13.883 0 00:13:13.883 18:38:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1041068 00:13:13.883 18:38:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1041068 ']' 00:13:13.883 18:38:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1041068 00:13:13.883 18:38:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:13:13.883 18:38:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:13.883 18:38:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1041068 00:13:13.883 18:38:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:13.883 18:38:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:13.883 18:38:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1041068' 00:13:13.883 killing process with pid 1041068 00:13:13.883 18:38:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1041068 00:13:13.883 Received shutdown signal, test time was about 10.000000 seconds 00:13:13.883 00:13:13.883 Latency(us) 00:13:13.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:13.883 =================================================================================================================== 00:13:13.883 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:13.883 18:38:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1041068 00:13:13.883 18:38:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:14.142 18:38:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:14.401 18:38:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a9690a7-504b-489b-9f19-946d4c375ffe 00:13:14.401 18:38:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:14.401 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:14.401 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:14.401 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:14.660 [2024-07-15 18:38:31.261532] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:14.660 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a9690a7-504b-489b-9f19-946d4c375ffe 00:13:14.660 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:13:14.660 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a9690a7-504b-489b-9f19-946d4c375ffe 00:13:14.660 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:14.660 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:14.660 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:14.660 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:14.660 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:14.660 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:14.660 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:14.660 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:14.660 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a9690a7-504b-489b-9f19-946d4c375ffe 00:13:14.919 request: 00:13:14.919 { 00:13:14.919 "uuid": "6a9690a7-504b-489b-9f19-946d4c375ffe", 00:13:14.919 "method": "bdev_lvol_get_lvstores", 00:13:14.919 "req_id": 1 00:13:14.919 } 00:13:14.919 Got JSON-RPC error response 00:13:14.919 response: 00:13:14.919 { 00:13:14.919 "code": -19, 00:13:14.919 "message": "No such device" 00:13:14.919 } 00:13:14.919 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:13:14.919 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:14.919 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:14.919 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:14.919 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:14.919 aio_bdev 00:13:14.919 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d4f7f767-7165-4a9e-91e0-912544b15ce6 00:13:14.919 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=d4f7f767-7165-4a9e-91e0-912544b15ce6 00:13:14.919 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:14.919 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:13:14.919 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:14.919 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:14.919 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:15.179 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d4f7f767-7165-4a9e-91e0-912544b15ce6 -t 2000 00:13:15.438 [ 00:13:15.438 { 00:13:15.438 "name": "d4f7f767-7165-4a9e-91e0-912544b15ce6", 00:13:15.438 "aliases": [ 00:13:15.438 "lvs/lvol" 00:13:15.438 ], 00:13:15.438 "product_name": "Logical Volume", 00:13:15.438 "block_size": 4096, 00:13:15.438 "num_blocks": 38912, 00:13:15.438 "uuid": "d4f7f767-7165-4a9e-91e0-912544b15ce6", 00:13:15.438 "assigned_rate_limits": { 00:13:15.438 
"rw_ios_per_sec": 0, 00:13:15.438 "rw_mbytes_per_sec": 0, 00:13:15.438 "r_mbytes_per_sec": 0, 00:13:15.438 "w_mbytes_per_sec": 0 00:13:15.438 }, 00:13:15.438 "claimed": false, 00:13:15.438 "zoned": false, 00:13:15.438 "supported_io_types": { 00:13:15.438 "read": true, 00:13:15.438 "write": true, 00:13:15.438 "unmap": true, 00:13:15.438 "flush": false, 00:13:15.438 "reset": true, 00:13:15.438 "nvme_admin": false, 00:13:15.438 "nvme_io": false, 00:13:15.438 "nvme_io_md": false, 00:13:15.438 "write_zeroes": true, 00:13:15.438 "zcopy": false, 00:13:15.438 "get_zone_info": false, 00:13:15.438 "zone_management": false, 00:13:15.438 "zone_append": false, 00:13:15.438 "compare": false, 00:13:15.438 "compare_and_write": false, 00:13:15.438 "abort": false, 00:13:15.438 "seek_hole": true, 00:13:15.438 "seek_data": true, 00:13:15.438 "copy": false, 00:13:15.438 "nvme_iov_md": false 00:13:15.438 }, 00:13:15.438 "driver_specific": { 00:13:15.438 "lvol": { 00:13:15.438 "lvol_store_uuid": "6a9690a7-504b-489b-9f19-946d4c375ffe", 00:13:15.438 "base_bdev": "aio_bdev", 00:13:15.438 "thin_provision": false, 00:13:15.438 "num_allocated_clusters": 38, 00:13:15.438 "snapshot": false, 00:13:15.438 "clone": false, 00:13:15.438 "esnap_clone": false 00:13:15.438 } 00:13:15.438 } 00:13:15.438 } 00:13:15.438 ] 00:13:15.438 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:13:15.438 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a9690a7-504b-489b-9f19-946d4c375ffe 00:13:15.438 18:38:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:15.438 18:38:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:15.438 18:38:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a9690a7-504b-489b-9f19-946d4c375ffe 00:13:15.438 18:38:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:15.697 18:38:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:15.697 18:38:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d4f7f767-7165-4a9e-91e0-912544b15ce6 00:13:15.956 18:38:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6a9690a7-504b-489b-9f19-946d4c375ffe 00:13:15.956 18:38:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:16.214 18:38:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:16.214 00:13:16.214 real 0m15.087s 00:13:16.214 user 0m14.753s 00:13:16.214 sys 0m1.291s 00:13:16.214 18:38:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:16.214 18:38:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:16.214 ************************************ 00:13:16.214 END TEST lvs_grow_clean 00:13:16.214 ************************************ 00:13:16.214 18:38:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:16.214 18:38:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:16.214 18:38:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:16.214 18:38:32 nvmf_tcp.nvmf_lvs_grow -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:13:16.214 18:38:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:16.214 ************************************ 00:13:16.214 START TEST lvs_grow_dirty 00:13:16.214 ************************************ 00:13:16.214 18:38:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:13:16.214 18:38:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:16.214 18:38:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:16.214 18:38:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:16.214 18:38:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:16.214 18:38:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:16.214 18:38:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:16.214 18:38:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:16.214 18:38:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:16.214 18:38:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:16.473 18:38:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:16.473 18:38:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:16.731 18:38:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ee9abb48-8b04-49ec-b43a-24bc7e21419f 00:13:16.731 18:38:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ee9abb48-8b04-49ec-b43a-24bc7e21419f 00:13:16.731 18:38:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:16.989 18:38:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:16.989 18:38:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:16.989 18:38:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ee9abb48-8b04-49ec-b43a-24bc7e21419f lvol 150 00:13:16.989 18:38:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b768cfa1-90a0-4389-b99a-78776d65704d 00:13:16.989 18:38:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:16.989 18:38:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:17.249 [2024-07-15 18:38:33.788955] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:17.249 [2024-07-15 18:38:33.789010] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:17.249 
true 00:13:17.249 18:38:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ee9abb48-8b04-49ec-b43a-24bc7e21419f 00:13:17.249 18:38:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:17.508 18:38:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:17.508 18:38:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:17.508 18:38:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b768cfa1-90a0-4389-b99a-78776d65704d 00:13:17.766 18:38:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:18.026 [2024-07-15 18:38:34.479012] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.026 18:38:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:18.026 18:38:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1043658 00:13:18.026 18:38:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:18.026 18:38:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- 
# trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:18.026 18:38:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1043658 /var/tmp/bdevperf.sock 00:13:18.026 18:38:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1043658 ']' 00:13:18.026 18:38:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:18.026 18:38:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:18.026 18:38:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:18.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:18.026 18:38:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:18.026 18:38:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:18.026 [2024-07-15 18:38:34.685240] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:13:18.026 [2024-07-15 18:38:34.685286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043658 ] 00:13:18.026 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.284 [2024-07-15 18:38:34.740015] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.284 [2024-07-15 18:38:34.818293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.854 18:38:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:18.854 18:38:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:18.854 18:38:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:19.114 Nvme0n1 00:13:19.114 18:38:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:19.374 [ 00:13:19.374 { 00:13:19.374 "name": "Nvme0n1", 00:13:19.374 "aliases": [ 00:13:19.374 "b768cfa1-90a0-4389-b99a-78776d65704d" 00:13:19.374 ], 00:13:19.374 "product_name": "NVMe disk", 00:13:19.374 "block_size": 4096, 00:13:19.374 "num_blocks": 38912, 00:13:19.374 "uuid": "b768cfa1-90a0-4389-b99a-78776d65704d", 00:13:19.374 "assigned_rate_limits": { 00:13:19.374 "rw_ios_per_sec": 0, 00:13:19.374 "rw_mbytes_per_sec": 0, 00:13:19.374 "r_mbytes_per_sec": 0, 00:13:19.374 "w_mbytes_per_sec": 0 00:13:19.374 }, 00:13:19.374 "claimed": false, 00:13:19.374 "zoned": false, 00:13:19.374 "supported_io_types": { 00:13:19.374 "read": true, 00:13:19.374 "write": true, 
00:13:19.374 "unmap": true, 00:13:19.374 "flush": true, 00:13:19.374 "reset": true, 00:13:19.374 "nvme_admin": true, 00:13:19.374 "nvme_io": true, 00:13:19.374 "nvme_io_md": false, 00:13:19.374 "write_zeroes": true, 00:13:19.374 "zcopy": false, 00:13:19.374 "get_zone_info": false, 00:13:19.374 "zone_management": false, 00:13:19.374 "zone_append": false, 00:13:19.374 "compare": true, 00:13:19.374 "compare_and_write": true, 00:13:19.374 "abort": true, 00:13:19.374 "seek_hole": false, 00:13:19.374 "seek_data": false, 00:13:19.374 "copy": true, 00:13:19.374 "nvme_iov_md": false 00:13:19.374 }, 00:13:19.374 "memory_domains": [ 00:13:19.374 { 00:13:19.374 "dma_device_id": "system", 00:13:19.374 "dma_device_type": 1 00:13:19.374 } 00:13:19.374 ], 00:13:19.374 "driver_specific": { 00:13:19.374 "nvme": [ 00:13:19.374 { 00:13:19.374 "trid": { 00:13:19.374 "trtype": "TCP", 00:13:19.374 "adrfam": "IPv4", 00:13:19.374 "traddr": "10.0.0.2", 00:13:19.374 "trsvcid": "4420", 00:13:19.374 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:19.374 }, 00:13:19.374 "ctrlr_data": { 00:13:19.374 "cntlid": 1, 00:13:19.374 "vendor_id": "0x8086", 00:13:19.374 "model_number": "SPDK bdev Controller", 00:13:19.374 "serial_number": "SPDK0", 00:13:19.374 "firmware_revision": "24.09", 00:13:19.374 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:19.374 "oacs": { 00:13:19.374 "security": 0, 00:13:19.374 "format": 0, 00:13:19.374 "firmware": 0, 00:13:19.374 "ns_manage": 0 00:13:19.374 }, 00:13:19.374 "multi_ctrlr": true, 00:13:19.374 "ana_reporting": false 00:13:19.374 }, 00:13:19.374 "vs": { 00:13:19.374 "nvme_version": "1.3" 00:13:19.374 }, 00:13:19.374 "ns_data": { 00:13:19.374 "id": 1, 00:13:19.374 "can_share": true 00:13:19.374 } 00:13:19.374 } 00:13:19.374 ], 00:13:19.374 "mp_policy": "active_passive" 00:13:19.374 } 00:13:19.374 } 00:13:19.374 ] 00:13:19.374 18:38:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1043890 00:13:19.374 18:38:35 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:19.374 18:38:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:19.374 Running I/O for 10 seconds... 00:13:20.772 Latency(us) 00:13:20.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:20.772 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:20.772 Nvme0n1 : 1.00 22150.00 86.52 0.00 0.00 0.00 0.00 0.00 00:13:20.772 =================================================================================================================== 00:13:20.772 Total : 22150.00 86.52 0.00 0.00 0.00 0.00 0.00 00:13:20.772 00:13:21.340 18:38:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ee9abb48-8b04-49ec-b43a-24bc7e21419f 00:13:21.598 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:21.598 Nvme0n1 : 2.00 22295.00 87.09 0.00 0.00 0.00 0.00 0.00 00:13:21.598 =================================================================================================================== 00:13:21.598 Total : 22295.00 87.09 0.00 0.00 0.00 0.00 0.00 00:13:21.598 00:13:21.598 true 00:13:21.598 18:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ee9abb48-8b04-49ec-b43a-24bc7e21419f 00:13:21.598 18:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:21.856 18:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:21.856 18:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 
00:13:21.856 18:38:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1043890 00:13:22.424 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:22.424 Nvme0n1 : 3.00 22351.33 87.31 0.00 0.00 0.00 0.00 0.00 00:13:22.424 =================================================================================================================== 00:13:22.424 Total : 22351.33 87.31 0.00 0.00 0.00 0.00 0.00 00:13:22.424 00:13:23.361 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:23.361 Nvme0n1 : 4.00 22377.50 87.41 0.00 0.00 0.00 0.00 0.00 00:13:23.361 =================================================================================================================== 00:13:23.361 Total : 22377.50 87.41 0.00 0.00 0.00 0.00 0.00 00:13:23.361 00:13:24.739 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:24.739 Nvme0n1 : 5.00 22386.80 87.45 0.00 0.00 0.00 0.00 0.00 00:13:24.739 =================================================================================================================== 00:13:24.739 Total : 22386.80 87.45 0.00 0.00 0.00 0.00 0.00 00:13:24.739 00:13:25.676 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:25.676 Nvme0n1 : 6.00 22435.67 87.64 0.00 0.00 0.00 0.00 0.00 00:13:25.676 =================================================================================================================== 00:13:25.676 Total : 22435.67 87.64 0.00 0.00 0.00 0.00 0.00 00:13:25.676 00:13:26.649 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:26.649 Nvme0n1 : 7.00 22464.86 87.75 0.00 0.00 0.00 0.00 0.00 00:13:26.649 =================================================================================================================== 00:13:26.649 Total : 22464.86 87.75 0.00 0.00 0.00 0.00 0.00 00:13:26.649 00:13:27.587 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:13:27.587 Nvme0n1 : 8.00 22495.75 87.87 0.00 0.00 0.00 0.00 0.00 00:13:27.587 =================================================================================================================== 00:13:27.587 Total : 22495.75 87.87 0.00 0.00 0.00 0.00 0.00 00:13:27.587 00:13:28.525 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:28.526 Nvme0n1 : 9.00 22520.67 87.97 0.00 0.00 0.00 0.00 0.00 00:13:28.526 =================================================================================================================== 00:13:28.526 Total : 22520.67 87.97 0.00 0.00 0.00 0.00 0.00 00:13:28.526 00:13:29.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:29.461 Nvme0n1 : 10.00 22544.60 88.06 0.00 0.00 0.00 0.00 0.00 00:13:29.461 =================================================================================================================== 00:13:29.461 Total : 22544.60 88.06 0.00 0.00 0.00 0.00 0.00 00:13:29.461 00:13:29.461 00:13:29.461 Latency(us) 00:13:29.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:29.461 Nvme0n1 : 10.01 22544.37 88.06 0.00 0.00 5673.57 4331.07 12822.26 00:13:29.461 =================================================================================================================== 00:13:29.461 Total : 22544.37 88.06 0.00 0.00 5673.57 4331.07 12822.26 00:13:29.461 0 00:13:29.461 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1043658 00:13:29.461 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1043658 ']' 00:13:29.461 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1043658 00:13:29.461 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:13:29.461 18:38:46 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:29.461 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1043658 00:13:29.461 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:29.461 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:29.461 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1043658' 00:13:29.461 killing process with pid 1043658 00:13:29.461 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1043658 00:13:29.461 Received shutdown signal, test time was about 10.000000 seconds 00:13:29.461 00:13:29.461 Latency(us) 00:13:29.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.461 =================================================================================================================== 00:13:29.461 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:29.461 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1043658 00:13:29.720 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:29.980 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:29.980 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ee9abb48-8b04-49ec-b43a-24bc7e21419f 00:13:29.980 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:30.239 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:30.239 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:30.239 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1040574 00:13:30.239 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1040574 00:13:30.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1040574 Killed "${NVMF_APP[@]}" "$@" 00:13:30.239 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:30.239 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:30.239 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:30.239 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:30.239 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:30.239 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1045736 00:13:30.239 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1045736 00:13:30.239 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:30.239 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1045736 ']' 00:13:30.239 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.239 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 
00:13:30.239 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.239 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:30.239 18:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:30.239 [2024-07-15 18:38:46.935786] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:13:30.239 [2024-07-15 18:38:46.935832] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.499 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.499 [2024-07-15 18:38:46.995066] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.499 [2024-07-15 18:38:47.073516] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.499 [2024-07-15 18:38:47.073548] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:30.499 [2024-07-15 18:38:47.073555] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.499 [2024-07-15 18:38:47.073564] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.499 [2024-07-15 18:38:47.073569] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:30.499 [2024-07-15 18:38:47.073603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.067 18:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:31.067 18:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:31.067 18:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:31.067 18:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:31.067 18:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:31.327 18:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:31.327 18:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:31.327 [2024-07-15 18:38:47.938751] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:31.327 [2024-07-15 18:38:47.938843] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:31.327 [2024-07-15 18:38:47.938868] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:31.327 18:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:31.327 18:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b768cfa1-90a0-4389-b99a-78776d65704d 00:13:31.327 18:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=b768cfa1-90a0-4389-b99a-78776d65704d 00:13:31.327 18:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:31.327 18:38:47 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:13:31.327 18:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:31.327 18:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:31.327 18:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:31.586 18:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b768cfa1-90a0-4389-b99a-78776d65704d -t 2000 00:13:31.845 [ 00:13:31.845 { 00:13:31.845 "name": "b768cfa1-90a0-4389-b99a-78776d65704d", 00:13:31.845 "aliases": [ 00:13:31.845 "lvs/lvol" 00:13:31.845 ], 00:13:31.845 "product_name": "Logical Volume", 00:13:31.845 "block_size": 4096, 00:13:31.845 "num_blocks": 38912, 00:13:31.845 "uuid": "b768cfa1-90a0-4389-b99a-78776d65704d", 00:13:31.845 "assigned_rate_limits": { 00:13:31.845 "rw_ios_per_sec": 0, 00:13:31.845 "rw_mbytes_per_sec": 0, 00:13:31.845 "r_mbytes_per_sec": 0, 00:13:31.845 "w_mbytes_per_sec": 0 00:13:31.845 }, 00:13:31.845 "claimed": false, 00:13:31.845 "zoned": false, 00:13:31.845 "supported_io_types": { 00:13:31.845 "read": true, 00:13:31.845 "write": true, 00:13:31.845 "unmap": true, 00:13:31.845 "flush": false, 00:13:31.845 "reset": true, 00:13:31.845 "nvme_admin": false, 00:13:31.845 "nvme_io": false, 00:13:31.845 "nvme_io_md": false, 00:13:31.845 "write_zeroes": true, 00:13:31.845 "zcopy": false, 00:13:31.845 "get_zone_info": false, 00:13:31.845 "zone_management": false, 00:13:31.846 "zone_append": false, 00:13:31.846 "compare": false, 00:13:31.846 "compare_and_write": false, 00:13:31.846 "abort": false, 00:13:31.846 "seek_hole": true, 00:13:31.846 "seek_data": true, 00:13:31.846 "copy": false, 00:13:31.846 "nvme_iov_md": false 
00:13:31.846 }, 00:13:31.846 "driver_specific": { 00:13:31.846 "lvol": { 00:13:31.846 "lvol_store_uuid": "ee9abb48-8b04-49ec-b43a-24bc7e21419f", 00:13:31.846 "base_bdev": "aio_bdev", 00:13:31.846 "thin_provision": false, 00:13:31.846 "num_allocated_clusters": 38, 00:13:31.846 "snapshot": false, 00:13:31.846 "clone": false, 00:13:31.846 "esnap_clone": false 00:13:31.846 } 00:13:31.846 } 00:13:31.846 } 00:13:31.846 ] 00:13:31.846 18:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:13:31.846 18:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ee9abb48-8b04-49ec-b43a-24bc7e21419f 00:13:31.846 18:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:31.846 18:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:31.846 18:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ee9abb48-8b04-49ec-b43a-24bc7e21419f 00:13:31.846 18:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:32.104 18:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:32.104 18:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:32.363 [2024-07-15 18:38:48.815329] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:32.363 18:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
ee9abb48-8b04-49ec-b43a-24bc7e21419f 00:13:32.363 18:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:13:32.363 18:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ee9abb48-8b04-49ec-b43a-24bc7e21419f 00:13:32.363 18:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:32.363 18:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:32.363 18:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:32.363 18:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:32.363 18:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:32.363 18:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:32.363 18:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:32.363 18:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:32.363 18:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ee9abb48-8b04-49ec-b43a-24bc7e21419f 00:13:32.363 request: 00:13:32.363 { 00:13:32.363 "uuid": "ee9abb48-8b04-49ec-b43a-24bc7e21419f", 00:13:32.363 "method": "bdev_lvol_get_lvstores", 
00:13:32.363 "req_id": 1 00:13:32.363 } 00:13:32.363 Got JSON-RPC error response 00:13:32.363 response: 00:13:32.363 { 00:13:32.363 "code": -19, 00:13:32.363 "message": "No such device" 00:13:32.363 } 00:13:32.363 18:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:13:32.363 18:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:32.363 18:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:32.363 18:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:32.363 18:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:32.622 aio_bdev 00:13:32.622 18:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b768cfa1-90a0-4389-b99a-78776d65704d 00:13:32.622 18:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=b768cfa1-90a0-4389-b99a-78776d65704d 00:13:32.622 18:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:32.622 18:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:13:32.622 18:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:32.622 18:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:32.622 18:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:32.882 18:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b768cfa1-90a0-4389-b99a-78776d65704d -t 2000 00:13:32.882 [ 00:13:32.882 { 00:13:32.882 "name": "b768cfa1-90a0-4389-b99a-78776d65704d", 00:13:32.882 "aliases": [ 00:13:32.882 "lvs/lvol" 00:13:32.882 ], 00:13:32.882 "product_name": "Logical Volume", 00:13:32.882 "block_size": 4096, 00:13:32.882 "num_blocks": 38912, 00:13:32.882 "uuid": "b768cfa1-90a0-4389-b99a-78776d65704d", 00:13:32.882 "assigned_rate_limits": { 00:13:32.882 "rw_ios_per_sec": 0, 00:13:32.882 "rw_mbytes_per_sec": 0, 00:13:32.882 "r_mbytes_per_sec": 0, 00:13:32.882 "w_mbytes_per_sec": 0 00:13:32.882 }, 00:13:32.882 "claimed": false, 00:13:32.882 "zoned": false, 00:13:32.882 "supported_io_types": { 00:13:32.882 "read": true, 00:13:32.882 "write": true, 00:13:32.882 "unmap": true, 00:13:32.882 "flush": false, 00:13:32.882 "reset": true, 00:13:32.882 "nvme_admin": false, 00:13:32.882 "nvme_io": false, 00:13:32.882 "nvme_io_md": false, 00:13:32.882 "write_zeroes": true, 00:13:32.882 "zcopy": false, 00:13:32.882 "get_zone_info": false, 00:13:32.882 "zone_management": false, 00:13:32.882 "zone_append": false, 00:13:32.882 "compare": false, 00:13:32.882 "compare_and_write": false, 00:13:32.882 "abort": false, 00:13:32.882 "seek_hole": true, 00:13:32.882 "seek_data": true, 00:13:32.882 "copy": false, 00:13:32.882 "nvme_iov_md": false 00:13:32.882 }, 00:13:32.882 "driver_specific": { 00:13:32.882 "lvol": { 00:13:32.882 "lvol_store_uuid": "ee9abb48-8b04-49ec-b43a-24bc7e21419f", 00:13:32.882 "base_bdev": "aio_bdev", 00:13:32.882 "thin_provision": false, 00:13:32.882 "num_allocated_clusters": 38, 00:13:32.882 "snapshot": false, 00:13:32.882 "clone": false, 00:13:32.882 "esnap_clone": false 00:13:32.882 } 00:13:32.882 } 00:13:32.882 } 00:13:32.882 ] 00:13:32.882 18:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:13:32.882 18:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ee9abb48-8b04-49ec-b43a-24bc7e21419f 00:13:32.882 18:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:33.141 18:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:33.141 18:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ee9abb48-8b04-49ec-b43a-24bc7e21419f 00:13:33.141 18:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:33.401 18:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:33.401 18:38:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b768cfa1-90a0-4389-b99a-78776d65704d 00:13:33.401 18:38:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ee9abb48-8b04-49ec-b43a-24bc7e21419f 00:13:33.660 18:38:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:33.919 18:38:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:33.919 00:13:33.919 real 0m17.565s 00:13:33.919 user 0m44.130s 00:13:33.919 sys 0m3.942s 00:13:33.919 18:38:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:33.919 18:38:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 
00:13:33.919 ************************************ 00:13:33.919 END TEST lvs_grow_dirty 00:13:33.919 ************************************ 00:13:33.919 18:38:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:33.919 18:38:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:33.919 18:38:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:13:33.919 18:38:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:13:33.919 18:38:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:13:33.919 18:38:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:33.919 18:38:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:13:33.919 18:38:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:13:33.919 18:38:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:13:33.919 18:38:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:33.919 nvmf_trace.0 00:13:33.919 18:38:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:13:33.919 18:38:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:33.919 18:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:33.919 18:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:13:33.919 18:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:33.919 18:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:13:33.919 18:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:33.919 18:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:33.919 rmmod 
nvme_tcp 00:13:33.919 rmmod nvme_fabrics 00:13:33.919 rmmod nvme_keyring 00:13:33.920 18:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:33.920 18:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:13:33.920 18:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:13:33.920 18:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1045736 ']' 00:13:33.920 18:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1045736 00:13:33.920 18:38:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1045736 ']' 00:13:33.920 18:38:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1045736 00:13:33.920 18:38:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:13:33.920 18:38:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:33.920 18:38:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1045736 00:13:34.179 18:38:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:34.179 18:38:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:34.179 18:38:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1045736' 00:13:34.179 killing process with pid 1045736 00:13:34.179 18:38:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1045736 00:13:34.179 18:38:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1045736 00:13:34.179 18:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:34.179 18:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:34.179 18:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:34.179 18:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:13:34.179 18:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:34.179 18:38:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.179 18:38:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.179 18:38:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.715 18:38:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:36.715 00:13:36.715 real 0m41.657s 00:13:36.715 user 1m4.664s 00:13:36.715 sys 0m9.620s 00:13:36.716 18:38:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:36.716 18:38:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:36.716 ************************************ 00:13:36.716 END TEST nvmf_lvs_grow 00:13:36.716 ************************************ 00:13:36.716 18:38:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:36.716 18:38:52 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:36.716 18:38:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:36.716 18:38:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:36.716 18:38:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:36.716 ************************************ 00:13:36.716 START TEST nvmf_bdev_io_wait 00:13:36.716 ************************************ 00:13:36.716 18:38:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:36.716 * Looking for test storage... 
00:13:36.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:13:36.716 18:38:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:41.988 18:38:58 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:41.988 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:41.988 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:41.988 18:38:58 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:41.988 Found net devices under 0000:86:00.0: cvl_0_0 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:41.988 Found net devices under 0000:86:00.1: cvl_0_1 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- 
# NVMF_SECOND_TARGET_IP= 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:41.988 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:41.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:41.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:13:41.989 00:13:41.989 --- 10.0.0.2 ping statistics --- 00:13:41.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.989 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:13:41.989 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:41.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:41.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:13:41.989 00:13:41.989 --- 10.0.0.1 ping statistics --- 00:13:41.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.989 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:13:41.989 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:41.989 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:13:41.989 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:41.989 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:41.989 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:41.989 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:41.989 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:41.989 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:41.989 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:41.989 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:41.989 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:41.989 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:41.989 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:13:41.989 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1049782 00:13:41.989 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1049782 00:13:41.989 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:41.989 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1049782 ']' 00:13:41.989 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.989 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:41.989 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.989 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:41.989 18:38:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:41.989 [2024-07-15 18:38:58.491587] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:13:41.989 [2024-07-15 18:38:58.491633] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.989 EAL: No free 2048 kB hugepages reported on node 1 00:13:41.989 [2024-07-15 18:38:58.549734] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:41.989 [2024-07-15 18:38:58.631045] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:41.989 [2024-07-15 18:38:58.631082] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:41.989 [2024-07-15 18:38:58.631089] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:41.989 [2024-07-15 18:38:58.631095] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:41.989 [2024-07-15 18:38:58.631101] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:41.989 [2024-07-15 18:38:58.631142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.989 [2024-07-15 18:38:58.631158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.989 [2024-07-15 18:38:58.631248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:41.989 [2024-07-15 18:38:58.631249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:42.927 [2024-07-15 18:38:59.412757] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:42.927 Malloc0 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.927 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:42.928 [2024-07-15 18:38:59.471827] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1050031 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1050033 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:42.928 { 00:13:42.928 "params": { 00:13:42.928 "name": "Nvme$subsystem", 00:13:42.928 "trtype": "$TEST_TRANSPORT", 
00:13:42.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:42.928 "adrfam": "ipv4", 00:13:42.928 "trsvcid": "$NVMF_PORT", 00:13:42.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:42.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:42.928 "hdgst": ${hdgst:-false}, 00:13:42.928 "ddgst": ${ddgst:-false} 00:13:42.928 }, 00:13:42.928 "method": "bdev_nvme_attach_controller" 00:13:42.928 } 00:13:42.928 EOF 00:13:42.928 )") 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1050035 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:42.928 { 00:13:42.928 "params": { 00:13:42.928 "name": "Nvme$subsystem", 00:13:42.928 "trtype": "$TEST_TRANSPORT", 00:13:42.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:42.928 "adrfam": "ipv4", 00:13:42.928 "trsvcid": "$NVMF_PORT", 00:13:42.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:42.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:42.928 "hdgst": ${hdgst:-false}, 00:13:42.928 "ddgst": ${ddgst:-false} 00:13:42.928 }, 00:13:42.928 "method": "bdev_nvme_attach_controller" 00:13:42.928 } 00:13:42.928 EOF 00:13:42.928 )") 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 
128 -o 4096 -w flush -t 1 -s 256 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1050038 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:42.928 { 00:13:42.928 "params": { 00:13:42.928 "name": "Nvme$subsystem", 00:13:42.928 "trtype": "$TEST_TRANSPORT", 00:13:42.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:42.928 "adrfam": "ipv4", 00:13:42.928 "trsvcid": "$NVMF_PORT", 00:13:42.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:42.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:42.928 "hdgst": ${hdgst:-false}, 00:13:42.928 "ddgst": ${ddgst:-false} 00:13:42.928 }, 00:13:42.928 "method": "bdev_nvme_attach_controller" 00:13:42.928 } 00:13:42.928 EOF 00:13:42.928 )") 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:42.928 { 00:13:42.928 "params": { 00:13:42.928 "name": "Nvme$subsystem", 00:13:42.928 "trtype": "$TEST_TRANSPORT", 00:13:42.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:42.928 "adrfam": "ipv4", 00:13:42.928 "trsvcid": "$NVMF_PORT", 00:13:42.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:42.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:42.928 "hdgst": ${hdgst:-false}, 00:13:42.928 "ddgst": ${ddgst:-false} 00:13:42.928 }, 00:13:42.928 "method": "bdev_nvme_attach_controller" 00:13:42.928 } 00:13:42.928 EOF 00:13:42.928 )") 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1050031 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:42.928 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:42.928 "params": { 00:13:42.928 "name": "Nvme1", 00:13:42.928 "trtype": "tcp", 00:13:42.928 "traddr": "10.0.0.2", 00:13:42.928 "adrfam": "ipv4", 00:13:42.928 "trsvcid": "4420", 00:13:42.928 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:42.929 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:42.929 "hdgst": false, 00:13:42.929 "ddgst": false 00:13:42.929 }, 00:13:42.929 "method": "bdev_nvme_attach_controller" 00:13:42.929 }' 00:13:42.929 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:13:42.929 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:42.929 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:42.929 "params": { 00:13:42.929 "name": "Nvme1", 00:13:42.929 "trtype": "tcp", 00:13:42.929 "traddr": "10.0.0.2", 00:13:42.929 "adrfam": "ipv4", 00:13:42.929 "trsvcid": "4420", 00:13:42.929 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:42.929 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:42.929 "hdgst": false, 00:13:42.929 "ddgst": false 00:13:42.929 }, 00:13:42.929 "method": "bdev_nvme_attach_controller" 00:13:42.929 }' 00:13:42.929 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:42.929 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:42.929 "params": { 00:13:42.929 "name": "Nvme1", 00:13:42.929 "trtype": "tcp", 00:13:42.929 "traddr": "10.0.0.2", 00:13:42.929 "adrfam": "ipv4", 00:13:42.929 "trsvcid": "4420", 00:13:42.929 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:42.929 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:42.929 "hdgst": false, 00:13:42.929 "ddgst": false 00:13:42.929 }, 00:13:42.929 "method": "bdev_nvme_attach_controller" 00:13:42.929 }' 00:13:42.929 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:42.929 18:38:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:42.929 "params": { 00:13:42.929 "name": "Nvme1", 00:13:42.929 "trtype": "tcp", 00:13:42.929 "traddr": "10.0.0.2", 00:13:42.929 "adrfam": "ipv4", 00:13:42.929 "trsvcid": "4420", 00:13:42.929 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:42.929 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:42.929 "hdgst": false, 00:13:42.929 "ddgst": false 00:13:42.929 }, 00:13:42.929 "method": "bdev_nvme_attach_controller" 00:13:42.929 }' 00:13:42.929 [2024-07-15 18:38:59.520280] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:13:42.929 [2024-07-15 18:38:59.520329] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:42.929 [2024-07-15 18:38:59.520383] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:13:42.929 [2024-07-15 18:38:59.520423] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:42.929 [2024-07-15 18:38:59.521058] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:13:42.929 [2024-07-15 18:38:59.521093] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:42.929 [2024-07-15 18:38:59.525079] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:13:42.929 [2024-07-15 18:38:59.525123] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:42.929 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.188 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.188 [2024-07-15 18:38:59.699426] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.188 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.188 [2024-07-15 18:38:59.778386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:13:43.188 [2024-07-15 18:38:59.797859] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.188 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.188 [2024-07-15 18:38:59.873572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:43.188 [2024-07-15 18:38:59.892808] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.447 [2024-07-15 18:38:59.938207] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.447 [2024-07-15 18:38:59.980243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:43.447 [2024-07-15 18:39:00.015106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:43.447 Running I/O for 1 seconds... 00:13:43.707 Running I/O for 1 seconds... 00:13:43.707 Running I/O for 1 seconds... 00:13:43.707 Running I/O for 1 seconds... 
00:13:44.668 00:13:44.668 Latency(us) 00:13:44.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.668 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:44.668 Nvme1n1 : 1.01 12176.49 47.56 0.00 0.00 10472.85 6297.15 17324.30 00:13:44.668 =================================================================================================================== 00:13:44.668 Total : 12176.49 47.56 0.00 0.00 10472.85 6297.15 17324.30 00:13:44.668 00:13:44.668 Latency(us) 00:13:44.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.668 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:44.668 Nvme1n1 : 1.01 11486.47 44.87 0.00 0.00 11108.25 1923.34 14417.92 00:13:44.668 =================================================================================================================== 00:13:44.668 Total : 11486.47 44.87 0.00 0.00 11108.25 1923.34 14417.92 00:13:44.668 00:13:44.668 Latency(us) 00:13:44.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.668 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:44.668 Nvme1n1 : 1.00 244732.45 955.99 0.00 0.00 520.72 209.25 637.55 00:13:44.668 =================================================================================================================== 00:13:44.668 Total : 244732.45 955.99 0.00 0.00 520.72 209.25 637.55 00:13:44.668 00:13:44.668 Latency(us) 00:13:44.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.668 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:44.668 Nvme1n1 : 1.00 10296.27 40.22 0.00 0.00 12396.32 4900.95 25074.64 00:13:44.668 =================================================================================================================== 00:13:44.668 Total : 10296.27 40.22 0.00 0.00 12396.32 4900.95 25074.64 00:13:44.940 18:39:01 nvmf_tcp.nvmf_bdev_io_wait 
-- target/bdev_io_wait.sh@38 -- # wait 1050033 00:13:44.940 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1050035 00:13:44.940 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1050038 00:13:44.940 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.940 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.940 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:44.940 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.940 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:44.940 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:44.940 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:44.940 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:13:44.940 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:44.940 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:13:44.940 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:44.940 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:44.940 rmmod nvme_tcp 00:13:44.940 rmmod nvme_fabrics 00:13:44.940 rmmod nvme_keyring 00:13:44.940 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:44.940 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:13:44.940 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:13:44.940 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1049782 ']' 00:13:44.940 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1049782 00:13:44.940 18:39:01 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1049782 ']' 00:13:44.940 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1049782 00:13:44.940 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:13:44.940 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:44.940 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1049782 00:13:44.940 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:44.940 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:44.940 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1049782' 00:13:44.940 killing process with pid 1049782 00:13:44.940 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1049782 00:13:44.940 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1049782 00:13:45.199 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:45.199 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:45.199 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:45.199 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:45.199 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:45.199 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.199 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.199 18:39:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.733 18:39:03 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:47.733 00:13:47.733 real 0m10.891s 00:13:47.733 user 0m19.519s 00:13:47.733 sys 0m5.768s 00:13:47.733 18:39:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:47.733 18:39:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:47.733 ************************************ 00:13:47.733 END TEST nvmf_bdev_io_wait 00:13:47.733 ************************************ 00:13:47.733 18:39:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:47.733 18:39:03 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:47.733 18:39:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:47.734 18:39:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:47.734 18:39:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:47.734 ************************************ 00:13:47.734 START TEST nvmf_queue_depth 00:13:47.734 ************************************ 00:13:47.734 18:39:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:47.734 * Looking for test storage... 
00:13:47.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:47.734 18:39:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:47.734 18:39:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:13:47.734 18:39:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local 
-a pci_devs 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:53.009 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:53.009 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:53.009 Found net devices under 0000:86:00.0: cvl_0_0 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:53.009 Found net devices under 0000:86:00.1: cvl_0_1 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:53.009 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:53.010 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:53.010 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:53.010 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:53.010 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:53.010 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:53.010 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:53.010 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:53.010 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:13:53.010 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:53.010 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:53.010 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:53.010 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:53.010 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:53.010 18:39:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:53.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:53.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:13:53.010 00:13:53.010 --- 10.0.0.2 ping statistics --- 00:13:53.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.010 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:53.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:53.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:13:53.010 00:13:53.010 --- 10.0.0.1 ping statistics --- 00:13:53.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.010 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1054194 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1054194 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1054194 ']' 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:53.010 18:39:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:53.010 [2024-07-15 18:39:09.262804] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:13:53.010 [2024-07-15 18:39:09.262848] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.010 EAL: No free 2048 kB hugepages reported on node 1 00:13:53.010 [2024-07-15 18:39:09.319273] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.010 [2024-07-15 18:39:09.397917] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:53.010 [2024-07-15 18:39:09.397952] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:53.010 [2024-07-15 18:39:09.397959] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:53.010 [2024-07-15 18:39:09.397965] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:53.010 [2024-07-15 18:39:09.397970] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:53.010 [2024-07-15 18:39:09.397991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:53.578 [2024-07-15 18:39:10.109514] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:53.578 18:39:10 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:53.578 Malloc0 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:53.578 [2024-07-15 18:39:10.171150] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1054557 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1054557 /var/tmp/bdevperf.sock 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1054557 ']' 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:53.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:53.578 18:39:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:53.578 [2024-07-15 18:39:10.221738] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:13:53.578 [2024-07-15 18:39:10.221778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1054557 ] 00:13:53.578 EAL: No free 2048 kB hugepages reported on node 1 00:13:53.578 [2024-07-15 18:39:10.274838] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.837 [2024-07-15 18:39:10.355108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.405 18:39:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:54.405 18:39:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:13:54.405 18:39:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:54.405 18:39:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.405 18:39:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:54.664 NVMe0n1 00:13:54.664 18:39:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.664 18:39:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:54.664 Running I/O for 10 seconds... 
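The queue_depth target setup traced above is a fixed sequence of five `rpc_cmd` calls (queue_depth.sh lines 23-27): create the TCP transport, back it with a malloc bdev, then expose that bdev through a subsystem listener. A minimal stand-alone sketch of that sequence — the relative `scripts/rpc.py` path is an assumption (standard SPDK checkout layout); in the harness, `rpc_cmd` wraps the same script against `/var/tmp/spdk.sock`:

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_queue_depth target-side RPC sequence, mirroring the
# rpc_cmd calls traced in this log. RPC path is an assumption (standard
# SPDK tree); the target address 10.0.0.2:4420 is taken from the log.
RPC="./scripts/rpc.py"
NQN="nqn.2016-06.io.spdk:cnode1"

setup_cmds=(
  "$RPC nvmf_create_transport -t tcp -o -u 8192"                      # queue_depth.sh@23
  "$RPC bdev_malloc_create 64 512 -b Malloc0"                         # @24: 64 MiB bdev, 512 B blocks
  "$RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001"          # @25: allow any host
  "$RPC nvmf_subsystem_add_ns $NQN Malloc0"                           # @26: attach namespace
  "$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420"  # @27: TCP listener
)

# Print the sequence; against a live target, replace echo with eval.
for cmd in "${setup_cmds[@]}"; do
  echo "would run: $cmd"
done
```

After the listener call, the log shows the corresponding target notice (`NVMe/TCP Target Listening on 10.0.0.2 port 4420`), which is the cue the harness waits for before starting bdevperf.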
00:14:04.773 00:14:04.773 Latency(us) 00:14:04.773 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.773 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:04.773 Verification LBA range: start 0x0 length 0x4000 00:14:04.773 NVMe0n1 : 10.06 12191.21 47.62 0.00 0.00 83728.64 19717.79 60179.14 00:14:04.773 =================================================================================================================== 00:14:04.773 Total : 12191.21 47.62 0.00 0.00 83728.64 19717.79 60179.14 00:14:04.773 0 00:14:04.773 18:39:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1054557 00:14:04.773 18:39:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1054557 ']' 00:14:04.773 18:39:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1054557 00:14:04.773 18:39:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:04.773 18:39:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:04.773 18:39:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1054557 00:14:04.773 18:39:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:04.773 18:39:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:04.773 18:39:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1054557' 00:14:04.773 killing process with pid 1054557 00:14:04.773 18:39:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1054557 00:14:04.773 Received shutdown signal, test time was about 10.000000 seconds 00:14:04.773 00:14:04.773 Latency(us) 00:14:04.773 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.773 
=================================================================================================================== 00:14:04.773 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:04.773 18:39:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1054557 00:14:05.033 18:39:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:05.033 18:39:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:05.033 18:39:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:05.033 18:39:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:14:05.033 18:39:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:05.033 18:39:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:14:05.033 18:39:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:05.033 18:39:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:05.033 rmmod nvme_tcp 00:14:05.033 rmmod nvme_fabrics 00:14:05.033 rmmod nvme_keyring 00:14:05.033 18:39:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:05.033 18:39:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:14:05.033 18:39:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:14:05.033 18:39:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1054194 ']' 00:14:05.033 18:39:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1054194 00:14:05.033 18:39:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1054194 ']' 00:14:05.033 18:39:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1054194 00:14:05.033 18:39:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:05.033 18:39:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:05.033 18:39:21 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1054194 00:14:05.292 18:39:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:05.292 18:39:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:05.292 18:39:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1054194' 00:14:05.292 killing process with pid 1054194 00:14:05.292 18:39:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1054194 00:14:05.292 18:39:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1054194 00:14:05.292 18:39:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:05.292 18:39:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:05.292 18:39:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:05.292 18:39:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:05.292 18:39:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:05.292 18:39:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.292 18:39:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.292 18:39:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.829 18:39:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:07.830 00:14:07.830 real 0m20.103s 00:14:07.830 user 0m25.012s 00:14:07.830 sys 0m5.373s 00:14:07.830 18:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:07.830 18:39:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:07.830 ************************************ 00:14:07.830 END TEST nvmf_queue_depth 
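The initiator side of the queue_depth run that just ended follows the pattern traced above: bdevperf is started in RPC-wait mode (`-z`), a controller is attached to the target over TCP via its private RPC socket, and `bdevperf.py ... perform_tests` triggers the timed workload. A sketch of the two command lines involved, with arguments copied from the log — the relative `build/examples` and `scripts` paths are assumptions (standard SPDK build tree):

```shell
#!/usr/bin/env bash
# Sketch of the bdevperf invocation from the queue_depth test: queue depth
# 1024, 4 KiB IO, verify workload, 10 s runtime, as seen in the log above.
# Relative paths are assumptions; the harness uses absolute workspace paths.
BDEVPERF=build/examples/bdevperf
SOCK=/var/tmp/bdevperf.sock

bdevperf_args=(-z -r "$SOCK" -q 1024 -o 4096 -w verify -t 10)
attach_cmd=(scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller \
  -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1)

# Print the sequence rather than executing it; a live run would background
# bdevperf, wait for $SOCK, attach, then call bdevperf.py perform_tests.
echo "bdevperf: $BDEVPERF ${bdevperf_args[*]}"
echo "attach:   ${attach_cmd[*]}"
```

The reported result (~12.2k IOPS, ~47.6 MiB/s) is consistent with the 4096-byte IO size: 12191 IOPS x 4 KiB is roughly 47.6 MiB/s.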
00:14:07.830 ************************************ 00:14:07.830 18:39:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:07.830 18:39:24 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:07.830 18:39:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:07.830 18:39:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:07.830 18:39:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:07.830 ************************************ 00:14:07.830 START TEST nvmf_target_multipath 00:14:07.830 ************************************ 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:07.830 * Looking for test storage... 00:14:07.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # 
nvmftestinit 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:14:07.830 18:39:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:13.106 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:13.106 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:14:13.106 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:13.106 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:13.106 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:13.106 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:13.106 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:13.106 
18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:14:13.106 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:13.106 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:14:13.106 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:14:13.106 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:14:13.106 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:14:13.106 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:14:13.106 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:14:13.106 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:13.107 18:39:29 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:13.107 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:13.107 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:13.107 
18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:13.107 Found net devices under 0000:86:00.0: cvl_0_0 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:13.107 Found net devices under 0000:86:00.1: cvl_0_1 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:13.107 18:39:29 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:13.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:13.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:14:13.107 00:14:13.107 --- 10.0.0.2 ping statistics --- 00:14:13.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.107 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:13.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:13.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:14:13.107 00:14:13.107 --- 10.0.0.1 ping statistics --- 00:14:13.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.107 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:14:13.107 only one NIC for nvmf test 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # 
nvmftestfini 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:13.107 rmmod nvme_tcp 00:14:13.107 rmmod nvme_fabrics 00:14:13.107 rmmod nvme_keyring 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:13.107 18:39:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.016 18:39:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:14:15.016 18:39:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:14:15.016 18:39:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:14:15.016 18:39:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:15.016 18:39:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:15.016 18:39:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:15.016 18:39:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:15.016 18:39:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:15.016 18:39:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:15.016 18:39:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:15.016 18:39:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:15.016 18:39:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:15.016 18:39:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:15.016 18:39:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:15.016 18:39:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:15.016 18:39:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:15.016 18:39:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:15.016 18:39:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:15.016 18:39:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.016 18:39:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.016 18:39:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:15.016 18:39:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:15.016 00:14:15.016 real 0m7.491s 00:14:15.016 user 0m1.599s 00:14:15.016 sys 0m3.878s 00:14:15.016 18:39:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:15.016 18:39:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:15.016 ************************************ 00:14:15.016 END TEST nvmf_target_multipath 00:14:15.016 ************************************ 00:14:15.016 18:39:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:15.016 18:39:31 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:15.016 18:39:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:15.016 18:39:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:15.016 18:39:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:15.016 ************************************ 00:14:15.016 START TEST nvmf_zcopy 00:14:15.016 ************************************ 00:14:15.016 18:39:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:15.016 * Looking for test storage... 
00:14:15.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:15.275 18:39:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:15.275 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:15.275 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.275 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.275 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.275 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.275 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.275 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:14:15.276 18:39:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:14:20.548 18:39:36 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:20.548 18:39:36 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:20.548 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:20.548 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:20.548 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:20.549 Found net devices under 0000:86:00.0: cvl_0_0 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:20.549 Found net devices under 0000:86:00.1: cvl_0_1 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.549 18:39:36 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:20.549 18:39:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:20.549 18:39:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:20.549 18:39:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:20.549 18:39:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:20.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:20.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:14:20.549 00:14:20.549 --- 10.0.0.2 ping statistics --- 00:14:20.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.549 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:14:20.549 18:39:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:20.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:20.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:14:20.549 00:14:20.549 --- 10.0.0.1 ping statistics --- 00:14:20.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.549 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:14:20.549 18:39:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:20.549 18:39:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:14:20.549 18:39:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:20.549 18:39:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:20.549 18:39:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:20.549 18:39:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:20.549 18:39:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:20.549 18:39:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:20.549 18:39:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:20.549 18:39:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:20.549 18:39:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:20.549 18:39:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:20.549 18:39:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:20.549 18:39:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1063211 00:14:20.549 18:39:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1063211 00:14:20.549 18:39:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:20.549 18:39:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1063211 ']' 00:14:20.549 18:39:37 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.549 18:39:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:20.549 18:39:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.549 18:39:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:20.549 18:39:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:20.549 [2024-07-15 18:39:37.142197] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:14:20.549 [2024-07-15 18:39:37.142247] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.549 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.549 [2024-07-15 18:39:37.197889] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.808 [2024-07-15 18:39:37.276123] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:20.808 [2024-07-15 18:39:37.276157] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:20.808 [2024-07-15 18:39:37.276163] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:20.808 [2024-07-15 18:39:37.276170] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:20.808 [2024-07-15 18:39:37.276175] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:20.808 [2024-07-15 18:39:37.276192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.375 18:39:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:21.375 18:39:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:14:21.375 18:39:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:21.375 18:39:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:21.375 18:39:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:21.375 18:39:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.375 18:39:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:14:21.375 18:39:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:14:21.375 18:39:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.375 18:39:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:21.375 [2024-07-15 18:39:37.985772] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.375 18:39:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.375 18:39:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:21.375 18:39:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.375 18:39:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:21.375 18:39:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.375 18:39:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.375 18:39:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 
00:14:21.375 18:39:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:21.375 [2024-07-15 18:39:38.001877] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.375 18:39:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.375 18:39:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:21.375 18:39:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.375 18:39:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:21.375 18:39:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.375 18:39:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:14:21.375 18:39:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.375 18:39:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:21.375 malloc0 00:14:21.375 18:39:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.375 18:39:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:21.375 18:39:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.375 18:39:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:21.375 18:39:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.375 18:39:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:14:21.375 18:39:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:14:21.375 18:39:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:21.375 18:39:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem 
config 00:14:21.375 18:39:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:21.375 18:39:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:21.375 { 00:14:21.375 "params": { 00:14:21.375 "name": "Nvme$subsystem", 00:14:21.375 "trtype": "$TEST_TRANSPORT", 00:14:21.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:21.375 "adrfam": "ipv4", 00:14:21.375 "trsvcid": "$NVMF_PORT", 00:14:21.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:21.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:21.375 "hdgst": ${hdgst:-false}, 00:14:21.375 "ddgst": ${ddgst:-false} 00:14:21.375 }, 00:14:21.375 "method": "bdev_nvme_attach_controller" 00:14:21.375 } 00:14:21.375 EOF 00:14:21.375 )") 00:14:21.375 18:39:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:21.375 18:39:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:14:21.375 18:39:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:21.375 18:39:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:21.375 "params": { 00:14:21.375 "name": "Nvme1", 00:14:21.375 "trtype": "tcp", 00:14:21.375 "traddr": "10.0.0.2", 00:14:21.375 "adrfam": "ipv4", 00:14:21.375 "trsvcid": "4420", 00:14:21.375 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.375 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:21.375 "hdgst": false, 00:14:21.375 "ddgst": false 00:14:21.375 }, 00:14:21.375 "method": "bdev_nvme_attach_controller" 00:14:21.375 }' 00:14:21.375 [2024-07-15 18:39:38.081397] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:14:21.375 [2024-07-15 18:39:38.081444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1063449 ] 00:14:21.633 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.633 [2024-07-15 18:39:38.135570] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.633 [2024-07-15 18:39:38.208412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.891 Running I/O for 10 seconds... 00:14:31.869 00:14:31.869 Latency(us) 00:14:31.869 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.869 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:14:31.869 Verification LBA range: start 0x0 length 0x1000 00:14:31.869 Nvme1n1 : 10.01 8670.00 67.73 0.00 0.00 14721.66 1766.62 25986.45 00:14:31.869 =================================================================================================================== 00:14:31.869 Total : 8670.00 67.73 0.00 0.00 14721.66 1766.62 25986.45 00:14:32.128 18:39:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1065100 00:14:32.128 18:39:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:14:32.128 18:39:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:32.128 18:39:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:14:32.128 18:39:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:14:32.128 18:39:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:32.128 18:39:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:32.128 18:39:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:32.128 18:39:48 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:32.128 { 00:14:32.128 "params": { 00:14:32.128 "name": "Nvme$subsystem", 00:14:32.128 "trtype": "$TEST_TRANSPORT", 00:14:32.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:32.128 "adrfam": "ipv4", 00:14:32.128 "trsvcid": "$NVMF_PORT", 00:14:32.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:32.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:32.128 "hdgst": ${hdgst:-false}, 00:14:32.128 "ddgst": ${ddgst:-false} 00:14:32.128 }, 00:14:32.128 "method": "bdev_nvme_attach_controller" 00:14:32.128 } 00:14:32.128 EOF 00:14:32.128 )") 00:14:32.128 [2024-07-15 18:39:48.751859] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.128 [2024-07-15 18:39:48.751889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.128 18:39:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:32.128 18:39:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:14:32.128 [2024-07-15 18:39:48.759847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.128 [2024-07-15 18:39:48.759860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.128 18:39:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:32.128 18:39:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:32.128 "params": { 00:14:32.128 "name": "Nvme1", 00:14:32.128 "trtype": "tcp", 00:14:32.129 "traddr": "10.0.0.2", 00:14:32.129 "adrfam": "ipv4", 00:14:32.129 "trsvcid": "4420", 00:14:32.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:32.129 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:32.129 "hdgst": false, 00:14:32.129 "ddgst": false 00:14:32.129 }, 00:14:32.129 "method": "bdev_nvme_attach_controller" 00:14:32.129 }' 00:14:32.129 [2024-07-15 18:39:48.767868] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.129 [2024-07-15 18:39:48.767879] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.129 [2024-07-15 18:39:48.775891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.129 [2024-07-15 18:39:48.775903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.129 [2024-07-15 18:39:48.783913] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.129 [2024-07-15 18:39:48.783924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.129 [2024-07-15 18:39:48.790662] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:14:32.129 [2024-07-15 18:39:48.790706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1065100 ] 00:14:32.129 [2024-07-15 18:39:48.791935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.129 [2024-07-15 18:39:48.791946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.129 [2024-07-15 18:39:48.799956] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.129 [2024-07-15 18:39:48.799971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.129 [2024-07-15 18:39:48.807979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.129 [2024-07-15 18:39:48.807990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.129 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.129 [2024-07-15 18:39:48.816001] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.129 [2024-07-15 18:39:48.816011] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.129 [2024-07-15 18:39:48.824021] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.129 [2024-07-15 18:39:48.824032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.129 [2024-07-15 18:39:48.832044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.129 [2024-07-15 18:39:48.832055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:48.840064] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:48.840075] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:48.844276] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.388 [2024-07-15 18:39:48.848086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:48.848097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:48.856109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:48.856123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:48.864131] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:48.864142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:48.872152] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:48.872162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:48.880172] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:48.880183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:48.888195] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:48.888216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:48.896216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:48.896233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:48.904242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:14:32.388 [2024-07-15 18:39:48.904268] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:48.912263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:48.912273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:48.919812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.388 [2024-07-15 18:39:48.920285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:48.920298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:48.928306] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:48.928317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:48.936335] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:48.936355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:48.944354] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:48.944367] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:48.952375] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:48.952388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:48.960391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:48.960402] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:48.968412] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:48.968422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:48.976435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:48.976446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:48.984458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:48.984469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:48.992477] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:48.992488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:49.000520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:49.000538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:49.008531] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:49.008549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:49.016545] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:49.016558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:49.024564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:49.024578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:49.032587] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:49.032600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:49.040609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:49.040620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:49.048643] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:49.048655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:49.056653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:49.056663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:49.064675] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:49.064685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:49.072700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:49.072714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:49.080723] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:49.080736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.388 [2024-07-15 18:39:49.088745] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.388 [2024-07-15 18:39:49.088758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.646 [2024-07-15 18:39:49.096766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.646 
[2024-07-15 18:39:49.096777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.646 [2024-07-15 18:39:49.104792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.646 [2024-07-15 18:39:49.104810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.646 Running I/O for 5 seconds... 00:14:32.646 [2024-07-15 18:39:49.112808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.646 [2024-07-15 18:39:49.112818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.646 [2024-07-15 18:39:49.123882] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.646 [2024-07-15 18:39:49.123902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.646 [2024-07-15 18:39:49.132587] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.646 [2024-07-15 18:39:49.132606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.646 [2024-07-15 18:39:49.141455] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.646 [2024-07-15 18:39:49.141474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.646 [2024-07-15 18:39:49.149924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.646 [2024-07-15 18:39:49.149942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.646 [2024-07-15 18:39:49.158539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.646 [2024-07-15 18:39:49.158557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.646 [2024-07-15 18:39:49.165439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.646 [2024-07-15 
18:39:49.165457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.646 [2024-07-15 18:39:49.176793] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.646 [2024-07-15 18:39:49.176812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.646 [2024-07-15 18:39:49.185823] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.646 [2024-07-15 18:39:49.185841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.646 [2024-07-15 18:39:49.194669] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.646 [2024-07-15 18:39:49.194687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.646 [2024-07-15 18:39:49.203683] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.646 [2024-07-15 18:39:49.203701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.646 [2024-07-15 18:39:49.212923] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.646 [2024-07-15 18:39:49.212942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.646 [2024-07-15 18:39:49.222315] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.646 [2024-07-15 18:39:49.222334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.646 [2024-07-15 18:39:49.231545] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.647 [2024-07-15 18:39:49.231563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.647 [2024-07-15 18:39:49.240729] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.647 [2024-07-15 18:39:49.240748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:14:32.647 [2024-07-15 18:39:49.249915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.647 [2024-07-15 18:39:49.249934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.647 [2024-07-15 18:39:49.259255] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.647 [2024-07-15 18:39:49.259278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.647 [2024-07-15 18:39:49.268297] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.647 [2024-07-15 18:39:49.268317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.647 [2024-07-15 18:39:49.277570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.647 [2024-07-15 18:39:49.277589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.647 [2024-07-15 18:39:49.286666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.647 [2024-07-15 18:39:49.286684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.647 [2024-07-15 18:39:49.295262] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.647 [2024-07-15 18:39:49.295280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.647 [2024-07-15 18:39:49.303979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.647 [2024-07-15 18:39:49.303997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.647 [2024-07-15 18:39:49.312648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.647 [2024-07-15 18:39:49.312667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.647 
[2024-07-15 18:39:49.321452] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.647 [2024-07-15 18:39:49.321471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.647 [2024-07-15 18:39:49.330558] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.647 [2024-07-15 18:39:49.330577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.647 [2024-07-15 18:39:49.339850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.647 [2024-07-15 18:39:49.339868] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.647 [2024-07-15 18:39:49.349056] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.647 [2024-07-15 18:39:49.349075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.906 [2024-07-15 18:39:49.358211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.906 [2024-07-15 18:39:49.358238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.906 [2024-07-15 18:39:49.365039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.906 [2024-07-15 18:39:49.365057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.906 [2024-07-15 18:39:49.375432] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.906 [2024-07-15 18:39:49.375450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.906 [2024-07-15 18:39:49.384053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.906 [2024-07-15 18:39:49.384072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.906 [2024-07-15 18:39:49.390855] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.906 [2024-07-15 18:39:49.390874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.906 [2024-07-15 18:39:49.401948] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.906 [2024-07-15 18:39:49.401967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.906 [2024-07-15 18:39:49.410735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.906 [2024-07-15 18:39:49.410753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.906 [2024-07-15 18:39:49.419329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.906 [2024-07-15 18:39:49.419348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.906 [2024-07-15 18:39:49.428004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.906 [2024-07-15 18:39:49.428027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.906 [2024-07-15 18:39:49.437272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.906 [2024-07-15 18:39:49.437291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.906 [2024-07-15 18:39:49.445903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.906 [2024-07-15 18:39:49.445922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.906 [2024-07-15 18:39:49.454431] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.906 [2024-07-15 18:39:49.454450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.906 [2024-07-15 18:39:49.463551] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:32.906 [2024-07-15 18:39:49.463569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.906 [2024-07-15 18:39:49.472334] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.906 [2024-07-15 18:39:49.472353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.906 [2024-07-15 18:39:49.480857] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.906 [2024-07-15 18:39:49.480875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.906 [2024-07-15 18:39:49.489509] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.906 [2024-07-15 18:39:49.489528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.906 [2024-07-15 18:39:49.498244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.906 [2024-07-15 18:39:49.498263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.906 [2024-07-15 18:39:49.505043] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.906 [2024-07-15 18:39:49.505062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.906 [2024-07-15 18:39:49.515880] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.906 [2024-07-15 18:39:49.515898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.906 [2024-07-15 18:39:49.524677] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.906 [2024-07-15 18:39:49.524695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.906 [2024-07-15 18:39:49.534108] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.906 
[2024-07-15 18:39:49.534127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.906 [2024-07-15 18:39:49.543788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.906 [2024-07-15 18:39:49.543807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.906 [2024-07-15 18:39:49.552202] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.906 [2024-07-15 18:39:49.552221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.906 [2024-07-15 18:39:49.560704] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.906 [2024-07-15 18:39:49.560723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.906 [2024-07-15 18:39:49.569650] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.906 [2024-07-15 18:39:49.569669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.906 [2024-07-15 18:39:49.578942] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.906 [2024-07-15 18:39:49.578960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.906 [2024-07-15 18:39:49.587581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.906 [2024-07-15 18:39:49.587600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.906 [2024-07-15 18:39:49.596732] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.906 [2024-07-15 18:39:49.596755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.906 [2024-07-15 18:39:49.605144] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.906 [2024-07-15 18:39:49.605163] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.165 [2024-07-15 18:39:49.614172] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.165 [2024-07-15 18:39:49.614192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.165 [2024-07-15 18:39:49.623306] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.165 [2024-07-15 18:39:49.623325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.165 [2024-07-15 18:39:49.632281] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.165 [2024-07-15 18:39:49.632299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.166 [2024-07-15 18:39:49.640798] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.166 [2024-07-15 18:39:49.640816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.166 [2024-07-15 18:39:49.650139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.166 [2024-07-15 18:39:49.650158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.166 [2024-07-15 18:39:49.658750] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.166 [2024-07-15 18:39:49.658768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.166 [2024-07-15 18:39:49.667318] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.166 [2024-07-15 18:39:49.667336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.166 [2024-07-15 18:39:49.675962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.166 [2024-07-15 18:39:49.675980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:14:33.166 [2024-07-15 18:39:49.684395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.166 [2024-07-15 18:39:49.684413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.166 [2024-07-15 18:39:49.692995] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.166 [2024-07-15 18:39:49.693014] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.166 [2024-07-15 18:39:49.701868] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.166 [2024-07-15 18:39:49.701887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.166 [2024-07-15 18:39:49.710611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.166 [2024-07-15 18:39:49.710629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.166 [2024-07-15 18:39:49.719545] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.166 [2024-07-15 18:39:49.719563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.166 [2024-07-15 18:39:49.728929] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.166 [2024-07-15 18:39:49.728948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.166 [2024-07-15 18:39:49.738086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.166 [2024-07-15 18:39:49.738105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.166 [2024-07-15 18:39:49.747364] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.166 [2024-07-15 18:39:49.747382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.166 [2024-07-15 18:39:49.756768] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.166 [2024-07-15 18:39:49.756787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.166
[... same two messages repeated continuously, roughly every 9 ms, from 18:39:49.765 through 18:39:51.212 (log timestamps 00:14:33.166 to 00:14:34.725); duplicate entries trimmed ...]
00:14:34.725 [2024-07-15 18:39:51.221561]
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.725 [2024-07-15 18:39:51.221579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.725 [2024-07-15 18:39:51.230850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.725 [2024-07-15 18:39:51.230868] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.725 [2024-07-15 18:39:51.239455] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.725 [2024-07-15 18:39:51.239473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.725 [2024-07-15 18:39:51.248126] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.725 [2024-07-15 18:39:51.248145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.725 [2024-07-15 18:39:51.257394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.725 [2024-07-15 18:39:51.257412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.725 [2024-07-15 18:39:51.266577] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.725 [2024-07-15 18:39:51.266595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.725 [2024-07-15 18:39:51.275919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.725 [2024-07-15 18:39:51.275937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.725 [2024-07-15 18:39:51.285289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.725 [2024-07-15 18:39:51.285308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.725 [2024-07-15 18:39:51.294500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:34.725 [2024-07-15 18:39:51.294520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.725 [2024-07-15 18:39:51.303812] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.725 [2024-07-15 18:39:51.303832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.725 [2024-07-15 18:39:51.313661] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.725 [2024-07-15 18:39:51.313680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.725 [2024-07-15 18:39:51.322188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.725 [2024-07-15 18:39:51.322206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.725 [2024-07-15 18:39:51.330608] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.725 [2024-07-15 18:39:51.330626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.725 [2024-07-15 18:39:51.339717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.725 [2024-07-15 18:39:51.339735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.725 [2024-07-15 18:39:51.349035] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.725 [2024-07-15 18:39:51.349054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.725 [2024-07-15 18:39:51.358323] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.725 [2024-07-15 18:39:51.358342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.725 [2024-07-15 18:39:51.366902] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.725 
[2024-07-15 18:39:51.366920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.725 [2024-07-15 18:39:51.373826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.725 [2024-07-15 18:39:51.373845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.725 [2024-07-15 18:39:51.384921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.725 [2024-07-15 18:39:51.384940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.725 [2024-07-15 18:39:51.393631] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.725 [2024-07-15 18:39:51.393650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.725 [2024-07-15 18:39:51.402260] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.725 [2024-07-15 18:39:51.402279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.725 [2024-07-15 18:39:51.410594] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.725 [2024-07-15 18:39:51.410612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.725 [2024-07-15 18:39:51.420247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.725 [2024-07-15 18:39:51.420266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.725 [2024-07-15 18:39:51.428927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.725 [2024-07-15 18:39:51.428946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.985 [2024-07-15 18:39:51.437549] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.985 [2024-07-15 18:39:51.437569] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.985 [2024-07-15 18:39:51.446066] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.985 [2024-07-15 18:39:51.446085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.985 [2024-07-15 18:39:51.454783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.985 [2024-07-15 18:39:51.454802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.985 [2024-07-15 18:39:51.463332] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.985 [2024-07-15 18:39:51.463350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.985 [2024-07-15 18:39:51.472053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.985 [2024-07-15 18:39:51.472075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.985 [2024-07-15 18:39:51.480692] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.985 [2024-07-15 18:39:51.480711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.985 [2024-07-15 18:39:51.489263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.985 [2024-07-15 18:39:51.489283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.985 [2024-07-15 18:39:51.498502] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.985 [2024-07-15 18:39:51.498524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.985 [2024-07-15 18:39:51.507051] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.985 [2024-07-15 18:39:51.507071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:14:34.985 [2024-07-15 18:39:51.516193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.985 [2024-07-15 18:39:51.516213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.985 [2024-07-15 18:39:51.525404] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.985 [2024-07-15 18:39:51.525424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.985 [2024-07-15 18:39:51.534800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.985 [2024-07-15 18:39:51.534819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.985 [2024-07-15 18:39:51.544121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.985 [2024-07-15 18:39:51.544139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.985 [2024-07-15 18:39:51.553247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.985 [2024-07-15 18:39:51.553266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.985 [2024-07-15 18:39:51.562108] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.985 [2024-07-15 18:39:51.562127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.985 [2024-07-15 18:39:51.570556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.985 [2024-07-15 18:39:51.570574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.985 [2024-07-15 18:39:51.579183] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.985 [2024-07-15 18:39:51.579203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.985 [2024-07-15 18:39:51.588449] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.985 [2024-07-15 18:39:51.588469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.985 [2024-07-15 18:39:51.597605] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.985 [2024-07-15 18:39:51.597624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.985 [2024-07-15 18:39:51.606973] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.985 [2024-07-15 18:39:51.606992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.985 [2024-07-15 18:39:51.615394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.985 [2024-07-15 18:39:51.615413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.985 [2024-07-15 18:39:51.624389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.985 [2024-07-15 18:39:51.624408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.985 [2024-07-15 18:39:51.633751] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.985 [2024-07-15 18:39:51.633770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.985 [2024-07-15 18:39:51.640500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.985 [2024-07-15 18:39:51.640518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.985 [2024-07-15 18:39:51.651709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.985 [2024-07-15 18:39:51.651728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.985 [2024-07-15 18:39:51.660524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:34.985 [2024-07-15 18:39:51.660543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.985 [2024-07-15 18:39:51.669111] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.985 [2024-07-15 18:39:51.669129] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.985 [2024-07-15 18:39:51.678346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.985 [2024-07-15 18:39:51.678365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.985 [2024-07-15 18:39:51.687099] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.986 [2024-07-15 18:39:51.687118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.246 [2024-07-15 18:39:51.695896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.246 [2024-07-15 18:39:51.695916] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.246 [2024-07-15 18:39:51.705071] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.246 [2024-07-15 18:39:51.705090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.246 [2024-07-15 18:39:51.714149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.246 [2024-07-15 18:39:51.714168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.246 [2024-07-15 18:39:51.723260] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.246 [2024-07-15 18:39:51.723279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.246 [2024-07-15 18:39:51.731921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.246 
[2024-07-15 18:39:51.731940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.246 [2024-07-15 18:39:51.741353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.246 [2024-07-15 18:39:51.741372] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.246 [2024-07-15 18:39:51.750656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.246 [2024-07-15 18:39:51.750675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.246 [2024-07-15 18:39:51.759510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.246 [2024-07-15 18:39:51.759529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.246 [2024-07-15 18:39:51.768059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.246 [2024-07-15 18:39:51.768078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.246 [2024-07-15 18:39:51.776708] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.246 [2024-07-15 18:39:51.776727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.246 [2024-07-15 18:39:51.785903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.246 [2024-07-15 18:39:51.785922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.246 [2024-07-15 18:39:51.795120] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.246 [2024-07-15 18:39:51.795138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.246 [2024-07-15 18:39:51.804457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.246 [2024-07-15 18:39:51.804487] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.246 [2024-07-15 18:39:51.813875] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.246 [2024-07-15 18:39:51.813895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.246 [2024-07-15 18:39:51.823099] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.246 [2024-07-15 18:39:51.823118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.246 [2024-07-15 18:39:51.831684] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.246 [2024-07-15 18:39:51.831703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.246 [2024-07-15 18:39:51.840652] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.246 [2024-07-15 18:39:51.840672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.246 [2024-07-15 18:39:51.849936] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.246 [2024-07-15 18:39:51.849955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.246 [2024-07-15 18:39:51.859116] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.246 [2024-07-15 18:39:51.859135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.246 [2024-07-15 18:39:51.868300] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.246 [2024-07-15 18:39:51.868320] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.246 [2024-07-15 18:39:51.876770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.246 [2024-07-15 18:39:51.876788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:14:35.246 [2024-07-15 18:39:51.885317] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.246 [2024-07-15 18:39:51.885335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.246 [2024-07-15 18:39:51.894515] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.246 [2024-07-15 18:39:51.894533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.246 [2024-07-15 18:39:51.901465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.246 [2024-07-15 18:39:51.901484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.246 [2024-07-15 18:39:51.911767] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.246 [2024-07-15 18:39:51.911786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.246 [2024-07-15 18:39:51.920597] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.246 [2024-07-15 18:39:51.920617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.246 [2024-07-15 18:39:51.929327] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.246 [2024-07-15 18:39:51.929346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.246 [2024-07-15 18:39:51.938663] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.246 [2024-07-15 18:39:51.938682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.246 [2024-07-15 18:39:51.947471] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.246 [2024-07-15 18:39:51.947489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.507 [2024-07-15 18:39:51.956140] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.507 [2024-07-15 18:39:51.956162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.507 [2024-07-15 18:39:51.964737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.507 [2024-07-15 18:39:51.964755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.507 [2024-07-15 18:39:51.973419] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.507 [2024-07-15 18:39:51.973438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.507 [2024-07-15 18:39:51.982893] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.507 [2024-07-15 18:39:51.982911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.507 [2024-07-15 18:39:51.991411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.507 [2024-07-15 18:39:51.991430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.507 [2024-07-15 18:39:52.000092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.507 [2024-07-15 18:39:52.000111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.507 [2024-07-15 18:39:52.009366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.507 [2024-07-15 18:39:52.009385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.507 [2024-07-15 18:39:52.017758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.507 [2024-07-15 18:39:52.017776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.507 [2024-07-15 18:39:52.026126] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:35.507 [2024-07-15 18:39:52.026144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.507 [2024-07-15 18:39:52.034417] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.507 [2024-07-15 18:39:52.034434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.507 [2024-07-15 18:39:52.043002] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.507 [2024-07-15 18:39:52.043020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.507 [2024-07-15 18:39:52.052084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.507 [2024-07-15 18:39:52.052103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.507 [2024-07-15 18:39:52.060964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.507 [2024-07-15 18:39:52.060983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.507 [2024-07-15 18:39:52.070831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.507 [2024-07-15 18:39:52.070849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.507 [2024-07-15 18:39:52.079657] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.507 [2024-07-15 18:39:52.079675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.507 [2024-07-15 18:39:52.088325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.507 [2024-07-15 18:39:52.088345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.507 [2024-07-15 18:39:52.097572] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.507 
[2024-07-15 18:39:52.097591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.507 [2024-07-15 18:39:52.106674] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.507 [2024-07-15 18:39:52.106692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.507 [2024-07-15 18:39:52.115840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.507 [2024-07-15 18:39:52.115859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.507 [2024-07-15 18:39:52.125275] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.507 [2024-07-15 18:39:52.125298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.507 [2024-07-15 18:39:52.134645] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.507 [2024-07-15 18:39:52.134663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.507 [2024-07-15 18:39:52.143870] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.507 [2024-07-15 18:39:52.143889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.507 [2024-07-15 18:39:52.152291] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.507 [2024-07-15 18:39:52.152309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.507 [2024-07-15 18:39:52.160916] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.507 [2024-07-15 18:39:52.160935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.507 [2024-07-15 18:39:52.167730] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.507 [2024-07-15 18:39:52.167749] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.507 [2024-07-15 18:39:52.178900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.507 [2024-07-15 18:39:52.178919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.507 [2024-07-15 18:39:52.187788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.507 [2024-07-15 18:39:52.187807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.507 [2024-07-15 18:39:52.197156] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.507 [2024-07-15 18:39:52.197174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.507 [2024-07-15 18:39:52.206367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.507 [2024-07-15 18:39:52.206385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.767 [2024-07-15 18:39:52.215372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.767 [2024-07-15 18:39:52.215391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.767 [2024-07-15 18:39:52.224022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.767 [2024-07-15 18:39:52.224040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.767 [2024-07-15 18:39:52.233330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.767 [2024-07-15 18:39:52.233348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.767 [2024-07-15 18:39:52.242586] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.767 [2024-07-15 18:39:52.242604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:14:35.767 [2024-07-15 18:39:52.251786] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.767 [2024-07-15 18:39:52.251805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.767 [2024-07-15 18:39:52.258642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.767 [2024-07-15 18:39:52.258660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.767 [2024-07-15 18:39:52.269612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.767 [2024-07-15 18:39:52.269631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.767 [2024-07-15 18:39:52.278324] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.767 [2024-07-15 18:39:52.278343] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.767 [2024-07-15 18:39:52.287570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.767 [2024-07-15 18:39:52.287588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.767 [2024-07-15 18:39:52.296219] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.767 [2024-07-15 18:39:52.296247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.767 [2024-07-15 18:39:52.305355] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.767 [2024-07-15 18:39:52.305373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.767 [2024-07-15 18:39:52.314774] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.767 [2024-07-15 18:39:52.314795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.767 [2024-07-15 18:39:52.323286] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.767 [2024-07-15 18:39:52.323305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.767 [2024-07-15 18:39:52.331931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.767 [2024-07-15 18:39:52.331950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.767 [2024-07-15 18:39:52.340961] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.767 [2024-07-15 18:39:52.340979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.767 [2024-07-15 18:39:52.350274] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.767 [2024-07-15 18:39:52.350292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.767 [2024-07-15 18:39:52.359462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.767 [2024-07-15 18:39:52.359480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.767 [2024-07-15 18:39:52.367819] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.767 [2024-07-15 18:39:52.367836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.767 [2024-07-15 18:39:52.377235] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.767 [2024-07-15 18:39:52.377254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.767 [2024-07-15 18:39:52.386712] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.767 [2024-07-15 18:39:52.386730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.767 [2024-07-15 18:39:52.393610] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:35.767 [2024-07-15 18:39:52.393628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.767 [2024-07-15 18:39:52.404704] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.767 [2024-07-15 18:39:52.404723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.767 [2024-07-15 18:39:52.413559] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.767 [2024-07-15 18:39:52.413578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.767 [2024-07-15 18:39:52.422797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.767 [2024-07-15 18:39:52.422816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.767 [2024-07-15 18:39:52.431951] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.767 [2024-07-15 18:39:52.431970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.767 [2024-07-15 18:39:52.440636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.767 [2024-07-15 18:39:52.440654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.767 [2024-07-15 18:39:52.449181] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.767 [2024-07-15 18:39:52.449200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.767 [2024-07-15 18:39:52.457853] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.767 [2024-07-15 18:39:52.457872] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.767 [2024-07-15 18:39:52.467020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.767 
[2024-07-15 18:39:52.467042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.027 [2024-07-15 18:39:52.476506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.027 [2024-07-15 18:39:52.476526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.027 [2024-07-15 18:39:52.486001] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.027 [2024-07-15 18:39:52.486019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.027 [2024-07-15 18:39:52.495149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.027 [2024-07-15 18:39:52.495167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.027 [2024-07-15 18:39:52.503952] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.027 [2024-07-15 18:39:52.503970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.027 [2024-07-15 18:39:52.513100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.027 [2024-07-15 18:39:52.513118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.027 [2024-07-15 18:39:52.522502] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.027 [2024-07-15 18:39:52.522520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.027 [2024-07-15 18:39:52.531281] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.027 [2024-07-15 18:39:52.531299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.027 [2024-07-15 18:39:52.540557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.027 [2024-07-15 18:39:52.540576] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.027 [2024-07-15 18:39:52.549904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.027 [2024-07-15 18:39:52.549923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.027 [2024-07-15 18:39:52.559198] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.027 [2024-07-15 18:39:52.559217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.027 [2024-07-15 18:39:52.568391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.027 [2024-07-15 18:39:52.568410] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.027 [2024-07-15 18:39:52.576919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.027 [2024-07-15 18:39:52.576937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.027 [2024-07-15 18:39:52.585130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.027 [2024-07-15 18:39:52.585148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.027 [2024-07-15 18:39:52.594292] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.027 [2024-07-15 18:39:52.594311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.027 [2024-07-15 18:39:52.603547] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.027 [2024-07-15 18:39:52.603565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.027 [2024-07-15 18:39:52.612798] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.027 [2024-07-15 18:39:52.612816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:14:36.027 [2024-07-15 18:39:52.621497] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.027 [2024-07-15 18:39:52.621515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.027 [2024-07-15 18:39:52.630820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.027 [2024-07-15 18:39:52.630837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.027 [2024-07-15 18:39:52.639376] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.027 [2024-07-15 18:39:52.639394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.027 [2024-07-15 18:39:52.648462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.027 [2024-07-15 18:39:52.648480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.027 [2024-07-15 18:39:52.657434] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.027 [2024-07-15 18:39:52.657453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.027 [2024-07-15 18:39:52.666532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.027 [2024-07-15 18:39:52.666550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.027 [2024-07-15 18:39:52.675637] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.027 [2024-07-15 18:39:52.675655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.027 [2024-07-15 18:39:52.684975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.027 [2024-07-15 18:39:52.684994] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.027 [2024-07-15 18:39:52.693489] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.027 [2024-07-15 18:39:52.693507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.027 [2024-07-15 18:39:52.702695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.027 [2024-07-15 18:39:52.702713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.027 [2024-07-15 18:39:52.711821] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.027 [2024-07-15 18:39:52.711840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.027 [2024-07-15 18:39:52.721016] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.027 [2024-07-15 18:39:52.721036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.027 [2024-07-15 18:39:52.729549] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.027 [2024-07-15 18:39:52.729568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.287 [2024-07-15 18:39:52.738879] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.287 [2024-07-15 18:39:52.738897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.287 [2024-07-15 18:39:52.753485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.287 [2024-07-15 18:39:52.753504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.287 [2024-07-15 18:39:52.761288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.287 [2024-07-15 18:39:52.761307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.287 [2024-07-15 18:39:52.769012] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:36.287 [2024-07-15 18:39:52.769031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.287 [2024-07-15 18:39:52.777993] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.287 [2024-07-15 18:39:52.778012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.287 [2024-07-15 18:39:52.786779] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.287 [2024-07-15 18:39:52.786796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.287 [2024-07-15 18:39:52.795862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.287 [2024-07-15 18:39:52.795881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.287 [2024-07-15 18:39:52.805126] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.287 [2024-07-15 18:39:52.805145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.287 [2024-07-15 18:39:52.814376] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.287 [2024-07-15 18:39:52.814395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.287 [2024-07-15 18:39:52.822964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.287 [2024-07-15 18:39:52.822982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.287 [2024-07-15 18:39:52.831558] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.287 [2024-07-15 18:39:52.831576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.287 [2024-07-15 18:39:52.840793] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.287 
[2024-07-15 18:39:52.840811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.287 [2024-07-15 18:39:52.850597] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.287 [2024-07-15 18:39:52.850615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.287 [2024-07-15 18:39:52.858814] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.287 [2024-07-15 18:39:52.858832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.287 [2024-07-15 18:39:52.868346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.287 [2024-07-15 18:39:52.868365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.287 [2024-07-15 18:39:52.877033] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.287 [2024-07-15 18:39:52.877053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.287 [2024-07-15 18:39:52.886420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.287 [2024-07-15 18:39:52.886439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.287 [2024-07-15 18:39:52.895831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.287 [2024-07-15 18:39:52.895849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.287 [2024-07-15 18:39:52.904624] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.287 [2024-07-15 18:39:52.904644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.287 [2024-07-15 18:39:52.913294] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.287 [2024-07-15 18:39:52.913313] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.287 [2024-07-15 18:39:52.921707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.287 [2024-07-15 18:39:52.921726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.287 [2024-07-15 18:39:52.930333] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.287 [2024-07-15 18:39:52.930353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.287 [2024-07-15 18:39:52.939022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.287 [2024-07-15 18:39:52.939042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.287 [2024-07-15 18:39:52.948371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.287 [2024-07-15 18:39:52.948390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.287 [2024-07-15 18:39:52.957529] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.287 [2024-07-15 18:39:52.957548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.287 [2024-07-15 18:39:52.966732] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.287 [2024-07-15 18:39:52.966751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.287 [2024-07-15 18:39:52.975832] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.287 [2024-07-15 18:39:52.975852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.287 [2024-07-15 18:39:52.985113] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.287 [2024-07-15 18:39:52.985133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:14:36.547 [2024-07-15 18:39:52.994327] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.547 [2024-07-15 18:39:52.994347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.547 [2024-07-15 18:39:53.003596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.547 [2024-07-15 18:39:53.003615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.547 [2024-07-15 18:39:53.012299] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.547 [2024-07-15 18:39:53.012318] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.547 [2024-07-15 18:39:53.020865] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.547 [2024-07-15 18:39:53.020884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.547 [2024-07-15 18:39:53.030013] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.547 [2024-07-15 18:39:53.030031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.547 [2024-07-15 18:39:53.039265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.547 [2024-07-15 18:39:53.039284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.547 [2024-07-15 18:39:53.048647] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.547 [2024-07-15 18:39:53.048666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.547 [2024-07-15 18:39:53.057215] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.547 [2024-07-15 18:39:53.057241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.547 [2024-07-15 18:39:53.066514] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.547 [2024-07-15 18:39:53.066534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.547 [2024-07-15 18:39:53.075015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.547 [2024-07-15 18:39:53.075034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.547 [2024-07-15 18:39:53.083648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.547 [2024-07-15 18:39:53.083667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.547 [2024-07-15 18:39:53.093109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.547 [2024-07-15 18:39:53.093129] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.547 [2024-07-15 18:39:53.101672] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.547 [2024-07-15 18:39:53.101691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.547 [2024-07-15 18:39:53.111041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.547 [2024-07-15 18:39:53.111060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.547 [2024-07-15 18:39:53.119817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.547 [2024-07-15 18:39:53.119836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.547 [2024-07-15 18:39:53.129186] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.547 [2024-07-15 18:39:53.129205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.547 [2024-07-15 18:39:53.138508] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:36.547 [2024-07-15 18:39:53.138526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.547 [2024-07-15 18:39:53.147272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.547 [2024-07-15 18:39:53.147291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.548 [2024-07-15 18:39:53.156028] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.548 [2024-07-15 18:39:53.156047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.548 [2024-07-15 18:39:53.164658] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.548 [2024-07-15 18:39:53.164677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.548 [2024-07-15 18:39:53.173352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.548 [2024-07-15 18:39:53.173373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.548 [2024-07-15 18:39:53.182675] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.548 [2024-07-15 18:39:53.182694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.548 [2024-07-15 18:39:53.191651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.548 [2024-07-15 18:39:53.191670] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.548 [2024-07-15 18:39:53.200829] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.548 [2024-07-15 18:39:53.200848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.548 [2024-07-15 18:39:53.209405] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.548 
[2024-07-15 18:39:53.209425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.548 [2024-07-15 18:39:53.218057] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.548 [2024-07-15 18:39:53.218076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.548 [2024-07-15 18:39:53.226811] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.548 [2024-07-15 18:39:53.226830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.548 [2024-07-15 18:39:53.235572] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.548 [2024-07-15 18:39:53.235590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.548 [2024-07-15 18:39:53.244870] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.548 [2024-07-15 18:39:53.244889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.548 [2024-07-15 18:39:53.254283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.548 [2024-07-15 18:39:53.254303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.806 [2024-07-15 18:39:53.261165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.806 [2024-07-15 18:39:53.261183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.806 [2024-07-15 18:39:53.272350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.806 [2024-07-15 18:39:53.272369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.806 [2024-07-15 18:39:53.281164] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.806 [2024-07-15 18:39:53.281184] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.806 [2024-07-15 18:39:53.290435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.806 [2024-07-15 18:39:53.290453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.806 [2024-07-15 18:39:53.299595] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.806 [2024-07-15 18:39:53.299613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.806 [2024-07-15 18:39:53.308841] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.806 [2024-07-15 18:39:53.308859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.806 [2024-07-15 18:39:53.318209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.806 [2024-07-15 18:39:53.318239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.806 [2024-07-15 18:39:53.324959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.806 [2024-07-15 18:39:53.324978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.806 [2024-07-15 18:39:53.336156] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.806 [2024-07-15 18:39:53.336176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.806 [2024-07-15 18:39:53.344943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.806 [2024-07-15 18:39:53.344962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.806 [2024-07-15 18:39:53.353542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.806 [2024-07-15 18:39:53.353560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:14:36.806 [2024-07-15 18:39:53.362891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.806 [2024-07-15 18:39:53.362909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.806 [2024-07-15 18:39:53.371341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.806 [2024-07-15 18:39:53.371359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.806 [2024-07-15 18:39:53.380163] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.806 [2024-07-15 18:39:53.380181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.806 [2024-07-15 18:39:53.388757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.806 [2024-07-15 18:39:53.388775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.806 [2024-07-15 18:39:53.397369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.806 [2024-07-15 18:39:53.397388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.806 [2024-07-15 18:39:53.406627] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.806 [2024-07-15 18:39:53.406645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.806 [2024-07-15 18:39:53.416132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.806 [2024-07-15 18:39:53.416151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.806 [2024-07-15 18:39:53.424870] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.806 [2024-07-15 18:39:53.424888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.806 [2024-07-15 18:39:53.431749] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.806 [2024-07-15 18:39:53.431767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.806 [2024-07-15 18:39:53.442722] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.806 [2024-07-15 18:39:53.442740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.806 [2024-07-15 18:39:53.451307] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.806 [2024-07-15 18:39:53.451325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.806 [2024-07-15 18:39:53.460345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.806 [2024-07-15 18:39:53.460364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.806 [2024-07-15 18:39:53.468975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.806 [2024-07-15 18:39:53.468994] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.806 [2024-07-15 18:39:53.477546] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.806 [2024-07-15 18:39:53.477565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.806 [2024-07-15 18:39:53.486154] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.806 [2024-07-15 18:39:53.486176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.806 [2024-07-15 18:39:53.494820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.806 [2024-07-15 18:39:53.494839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.806 [2024-07-15 18:39:53.504012] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:36.806 [2024-07-15 18:39:53.504031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.806 00:14:37.603 Latency(us) 00:14:37.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.603 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:14:37.603 Nvme1n1 : 5.01 16644.92 130.04 0.00 0.00 7682.38 3348.03 18805.98 00:14:37.603 =================================================================================================================== 00:14:37.603 Total : 16644.92 130.04 0.00 0.00 7682.38 3348.03 18805.98 00:14:37.603
00:14:37.862 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1065100) - No such process 00:14:37.862 18:39:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1065100 00:14:37.862 18:39:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:37.862 18:39:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.862 18:39:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:37.862 18:39:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
18:39:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:37.862 18:39:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.862 18:39:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:37.862 delay0 00:14:37.862 18:39:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.862 18:39:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:14:37.862 18:39:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.862 18:39:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:37.862 18:39:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.862 18:39:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:14:37.862 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.862 [2024-07-15 18:39:54.443932] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:44.507 Initializing NVMe Controllers 00:14:44.507 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:44.507 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:44.507 Initialization complete. Launching workers. 
00:14:44.507 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 98 00:14:44.507 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 385, failed to submit 33 00:14:44.507 success 178, unsuccess 207, failed 0 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:44.507 rmmod nvme_tcp 00:14:44.507 rmmod nvme_fabrics 00:14:44.507 rmmod nvme_keyring 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1063211 ']' 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1063211 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1063211 ']' 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1063211 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1063211 00:14:44.507 18:40:00 
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1063211' 00:14:44.507 killing process with pid 1063211 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1063211 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1063211 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:44.507 18:40:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.416 18:40:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:46.416 00:14:46.416 real 0m31.265s 00:14:46.416 user 0m42.947s 00:14:46.416 sys 0m10.490s 00:14:46.416 18:40:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:46.416 18:40:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:46.416 ************************************ 00:14:46.416 END TEST nvmf_zcopy 00:14:46.416 ************************************ 00:14:46.416 18:40:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:46.416 18:40:02 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:46.416 18:40:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:46.416 18:40:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:46.416 18:40:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:46.416 ************************************ 00:14:46.416 START TEST nvmf_nmic 00:14:46.416 ************************************ 00:14:46.416 18:40:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:46.416 * Looking for test storage... 00:14:46.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:46.416 
18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:14:46.416 18:40:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:51.689 18:40:07 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:51.689 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:51.689 Found 0000:86:00.1 (0x8086 - 0x159b) 
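The matching traced above (nvmf/common.sh@296 through @330) buckets NICs by PCI vendor:device ID before the harness picks its transport interfaces; both ports in this run (0000:86:00.0/1, 0x8086:0x159b) land in the e810 bucket. A minimal sketch of that classification, with the ID tables copied from the trace — the function and variable names here are illustrative, not SPDK's:

```python
# Vendor/device IDs as listed in the nvmf/common.sh trace above.
INTEL, MELLANOX = 0x8086, 0x15B3

E810 = {0x1592, 0x159B}    # Intel E810 family (ice driver)
X722 = {0x37D2}            # Intel X722
MLX = {0xA2DC, 0x1021, 0xA2D6, 0x101D, 0x1017, 0x1019, 0x1015, 0x1013}

def classify(vendor: int, device: int) -> str:
    """Return the NIC family label the harness uses, or 'unknown'."""
    if vendor == INTEL and device in E810:
        return "e810"
    if vendor == INTEL and device in X722:
        return "x722"
    if vendor == MELLANOX and device in MLX:
        return "mlx"
    return "unknown"

# The two ports found in this run report vendor 0x8086, device 0x159b:
print(classify(0x8086, 0x159B))  # -> e810
```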
00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:51.689 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:51.690 Found net devices under 0000:86:00.0: cvl_0_0 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:51.690 18:40:07 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:51.690 Found net devices under 0000:86:00.1: cvl_0_1 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:51.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:51.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:14:51.690 00:14:51.690 --- 10.0.0.2 ping statistics --- 00:14:51.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.690 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:51.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:51.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:14:51.690 00:14:51.690 --- 10.0.0.1 ping statistics --- 00:14:51.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.690 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1070415 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1070415 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1070415 ']' 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:51.690 18:40:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:51.690 [2024-07-15 18:40:07.737324] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:14:51.690 [2024-07-15 18:40:07.737367] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.690 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.690 [2024-07-15 18:40:07.793459] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:51.690 [2024-07-15 18:40:07.874394] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.690 [2024-07-15 18:40:07.874431] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.690 [2024-07-15 18:40:07.874440] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.690 [2024-07-15 18:40:07.874446] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.690 [2024-07-15 18:40:07.874452] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:51.690 [2024-07-15 18:40:07.874492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.690 [2024-07-15 18:40:07.874591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.690 [2024-07-15 18:40:07.874733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:51.690 [2024-07-15 18:40:07.874735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.949 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:51.949 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:14:51.949 18:40:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:51.949 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:51.949 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:51.949 18:40:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.949 18:40:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:51.950 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.950 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:51.950 [2024-07-15 18:40:08.598115] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.950 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.950 18:40:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:51.950 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.950 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:51.950 Malloc0 00:14:51.950 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.950 18:40:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:51.950 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.950 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:51.950 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.950 18:40:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:51.950 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.950 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:51.950 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.950 18:40:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:51.950 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.950 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:51.950 [2024-07-15 18:40:08.649950] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.950 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.950 18:40:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:51.950 test case1: single bdev can't be used in multiple subsystems 00:14:51.950 18:40:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:51.950 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.950 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:52.208 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.208 18:40:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:52.208 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.208 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:52.208 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.208 18:40:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:14:52.208 18:40:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:52.208 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.208 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:52.208 [2024-07-15 18:40:08.677883] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:52.208 [2024-07-15 18:40:08.677902] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:52.208 [2024-07-15 18:40:08.677909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.208 request: 00:14:52.208 { 00:14:52.208 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:52.208 "namespace": { 00:14:52.208 "bdev_name": "Malloc0", 00:14:52.208 "no_auto_visible": false 00:14:52.208 }, 00:14:52.208 "method": "nvmf_subsystem_add_ns", 00:14:52.208 "req_id": 1 00:14:52.208 } 00:14:52.208 Got JSON-RPC error response 00:14:52.208 response: 00:14:52.208 { 00:14:52.208 "code": -32602, 00:14:52.208 "message": "Invalid parameters" 00:14:52.208 } 00:14:52.208 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:14:52.208 18:40:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:14:52.208 18:40:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:14:52.208 18:40:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:14:52.208 Adding namespace failed - expected result. 00:14:52.208 18:40:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:52.209 test case2: host connect to nvmf target in multiple paths 00:14:52.209 18:40:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:52.209 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.209 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:52.209 [2024-07-15 18:40:08.690019] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:52.209 18:40:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.209 18:40:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:53.587 18:40:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:14:54.522 18:40:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:54.522 18:40:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:14:54.522 18:40:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:54.522 18:40:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:54.522 18:40:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:14:56.428 18:40:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:56.428 18:40:13 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:56.428 18:40:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:56.428 18:40:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:56.428 18:40:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:56.428 18:40:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:14:56.428 18:40:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:56.428 [global] 00:14:56.428 thread=1 00:14:56.428 invalidate=1 00:14:56.428 rw=write 00:14:56.428 time_based=1 00:14:56.428 runtime=1 00:14:56.428 ioengine=libaio 00:14:56.428 direct=1 00:14:56.428 bs=4096 00:14:56.428 iodepth=1 00:14:56.428 norandommap=0 00:14:56.428 numjobs=1 00:14:56.428 00:14:56.428 verify_dump=1 00:14:56.428 verify_backlog=512 00:14:56.428 verify_state_save=0 00:14:56.428 do_verify=1 00:14:56.428 verify=crc32c-intel 00:14:56.428 [job0] 00:14:56.428 filename=/dev/nvme0n1 00:14:56.691 Could not set queue depth (nvme0n1) 00:14:56.691 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:56.691 fio-3.35 00:14:56.691 Starting 1 thread 00:14:58.071 00:14:58.071 job0: (groupid=0, jobs=1): err= 0: pid=1071488: Mon Jul 15 18:40:14 2024 00:14:58.071 read: IOPS=1961, BW=7844KiB/s (8032kB/s)(7852KiB/1001msec) 00:14:58.071 slat (nsec): min=6272, max=30466, avg=7214.66, stdev=1255.63 00:14:58.071 clat (usec): min=235, max=620, avg=297.09, stdev=38.35 00:14:58.071 lat (usec): min=242, max=648, avg=304.30, stdev=38.59 00:14:58.071 clat percentiles (usec): 00:14:58.071 | 1.00th=[ 241], 5.00th=[ 245], 10.00th=[ 247], 20.00th=[ 251], 00:14:58.071 | 30.00th=[ 269], 40.00th=[ 289], 50.00th=[ 314], 60.00th=[ 322], 00:14:58.071 | 70.00th=[ 322], 
80.00th=[ 326], 90.00th=[ 330], 95.00th=[ 334], 00:14:58.071 | 99.00th=[ 424], 99.50th=[ 457], 99.90th=[ 515], 99.95th=[ 619], 00:14:58.071 | 99.99th=[ 619] 00:14:58.071 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:14:58.071 slat (usec): min=9, max=23944, avg=22.07, stdev=528.89 00:14:58.071 clat (usec): min=149, max=434, avg=170.09, stdev=19.15 00:14:58.071 lat (usec): min=159, max=24379, avg=192.16, stdev=535.07 00:14:58.071 clat percentiles (usec): 00:14:58.072 | 1.00th=[ 153], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 159], 00:14:58.072 | 30.00th=[ 161], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 165], 00:14:58.072 | 70.00th=[ 169], 80.00th=[ 186], 90.00th=[ 196], 95.00th=[ 202], 00:14:58.072 | 99.00th=[ 243], 99.50th=[ 245], 99.90th=[ 281], 99.95th=[ 396], 00:14:58.072 | 99.99th=[ 437] 00:14:58.072 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:14:58.072 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:58.072 lat (usec) : 250=59.36%, 500=40.59%, 750=0.05% 00:14:58.072 cpu : usr=2.30%, sys=3.40%, ctx=4015, majf=0, minf=2 00:14:58.072 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:58.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:58.072 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:58.072 issued rwts: total=1963,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:58.072 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:58.072 00:14:58.072 Run status group 0 (all jobs): 00:14:58.072 READ: bw=7844KiB/s (8032kB/s), 7844KiB/s-7844KiB/s (8032kB/s-8032kB/s), io=7852KiB (8040kB), run=1001-1001msec 00:14:58.072 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:14:58.072 00:14:58.072 Disk stats (read/write): 00:14:58.072 nvme0n1: ios=1677/2048, merge=0/0, ticks=1467/332, in_queue=1799, util=98.50% 
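The fio summary above reports read BW=7844KiB/s, io=7852KiB over a 1001 msec run with 1963 issued read IOs of bs=4096. Those figures are mutually consistent; a quick check of the arithmetic, with the values copied from the log:

```python
# Figures copied from job0's fio summary above.
ios = 1963          # issued rwts: total=1963,...
bs = 4096           # bs=4096 (bytes per IO)
runtime_ms = 1001   # run=1001-1001msec

total_kib = ios * bs // 1024            # total read volume in KiB
bw_kib_s = total_kib / (runtime_ms / 1000)

print(total_kib, round(bw_kib_s))       # -> 7852 7844 (matches io= and BW=)
```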
00:14:58.072 18:40:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:58.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:58.072 18:40:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:58.072 18:40:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:14:58.072 18:40:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:58.072 18:40:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:58.072 18:40:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:58.072 18:40:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:58.072 18:40:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:14:58.072 18:40:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:58.072 18:40:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:14:58.072 18:40:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:58.072 18:40:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:14:58.072 18:40:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:58.072 18:40:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:14:58.072 18:40:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:58.072 18:40:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:58.072 rmmod nvme_tcp 00:14:58.072 rmmod nvme_fabrics 00:14:58.072 rmmod nvme_keyring 00:14:58.072 18:40:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:58.072 18:40:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:14:58.072 18:40:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:14:58.072 18:40:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1070415 ']' 
00:14:58.072 18:40:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1070415 00:14:58.072 18:40:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1070415 ']' 00:14:58.072 18:40:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1070415 00:14:58.072 18:40:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:14:58.072 18:40:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:58.072 18:40:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1070415 00:14:58.331 18:40:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:58.331 18:40:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:58.331 18:40:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1070415' 00:14:58.331 killing process with pid 1070415 00:14:58.331 18:40:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1070415 00:14:58.331 18:40:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1070415 00:14:58.331 18:40:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:58.331 18:40:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:58.332 18:40:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:58.332 18:40:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:58.332 18:40:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:58.332 18:40:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.332 18:40:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:58.332 18:40:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.869 18:40:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
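The cleanup above unwinds the namespace topology that nvmftestinit built at the start of this test (nvmf/common.sh@244 through @268): the target-side port is moved into a private netns while the initiator port stays in the root namespace. A sketch of that setup and its teardown, using the interface names and addresses from this run — it requires root and the two ice ports found above, and since the teardown body is suppressed in the trace (`_remove_spdk_ns 14> /dev/null`), the `ip netns del` line is an assumption about what `remove_spdk_ns` does rather than a command visible in this log:

```shell
# Setup, as traced by nvmftestinit: isolate the target port in a netns.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Teardown, as run by nvmftestfini: drop the netns (assumed) and flush.
ip netns del cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1
```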
00:15:00.869 00:15:00.869 real 0m14.086s 00:15:00.869 user 0m34.990s 00:15:00.869 sys 0m4.160s 00:15:00.869 18:40:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:00.869 18:40:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:00.869 ************************************ 00:15:00.869 END TEST nvmf_nmic 00:15:00.869 ************************************ 00:15:00.869 18:40:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:00.869 18:40:17 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:00.869 18:40:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:00.869 18:40:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:00.869 18:40:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:00.869 ************************************ 00:15:00.869 START TEST nvmf_fio_target 00:15:00.869 ************************************ 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:00.869 * Looking for test storage... 
00:15:00.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.869 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:00.870 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:15:00.870 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:00.870 18:40:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:00.870 18:40:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:00.870 18:40:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:00.870 18:40:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:15:00.870 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:00.870 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:00.870 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:00.870 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:00.870 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:00.870 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.870 18:40:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:00.870 18:40:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.870 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:00.870 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:00.870 18:40:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:00.870 18:40:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:06.142 
18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:06.142 
18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:06.142 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:06.142 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:06.142 Found net devices under 0000:86:00.0: cvl_0_0 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:06.142 Found net devices under 0000:86:00.1: cvl_0_1 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:06.142 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:06.143 18:40:22 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:06.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:06.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:15:06.143 00:15:06.143 --- 10.0.0.2 ping statistics --- 00:15:06.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.143 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:06.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:06.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:15:06.143 00:15:06.143 --- 10.0.0.1 ping statistics --- 00:15:06.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.143 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1075213 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1075213 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 
-- # '[' -z 1075213 ']' 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:06.143 18:40:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.143 [2024-07-15 18:40:22.638538] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:15:06.143 [2024-07-15 18:40:22.638586] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.143 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.143 [2024-07-15 18:40:22.700546] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:06.143 [2024-07-15 18:40:22.779638] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.143 [2024-07-15 18:40:22.779677] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:06.143 [2024-07-15 18:40:22.779684] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:06.143 [2024-07-15 18:40:22.779690] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:06.143 [2024-07-15 18:40:22.779695] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:06.143 [2024-07-15 18:40:22.779758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.143 [2024-07-15 18:40:22.779778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:06.143 [2024-07-15 18:40:22.780033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:06.143 [2024-07-15 18:40:22.780035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.087 18:40:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:07.087 18:40:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:15:07.087 18:40:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:07.087 18:40:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:07.087 18:40:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.087 18:40:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:07.087 18:40:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:07.087 [2024-07-15 18:40:23.632761] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:07.087 18:40:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:07.398 18:40:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:07.398 18:40:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:07.398 18:40:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:07.398 18:40:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:15:07.657 18:40:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:07.657 18:40:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:07.916 18:40:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:07.916 18:40:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:08.174 18:40:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:08.174 18:40:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:08.174 18:40:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:08.432 18:40:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:08.432 18:40:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:08.690 18:40:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:08.690 18:40:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:08.948 18:40:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:08.948 18:40:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:08.948 18:40:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:09.206 18:40:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:09.206 18:40:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:09.464 18:40:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:09.464 [2024-07-15 18:40:26.098997] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.464 18:40:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:09.722 18:40:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:09.980 18:40:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:10.915 18:40:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:10.915 18:40:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:15:10.915 18:40:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:10.915 18:40:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:15:10.915 18:40:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:15:10.915 18:40:27 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:15:13.446 18:40:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:13.446 18:40:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:13.446 18:40:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:13.446 18:40:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:15:13.446 18:40:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:13.446 18:40:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:15:13.446 18:40:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:13.446 [global] 00:15:13.446 thread=1 00:15:13.446 invalidate=1 00:15:13.446 rw=write 00:15:13.446 time_based=1 00:15:13.446 runtime=1 00:15:13.446 ioengine=libaio 00:15:13.446 direct=1 00:15:13.446 bs=4096 00:15:13.446 iodepth=1 00:15:13.446 norandommap=0 00:15:13.446 numjobs=1 00:15:13.446 00:15:13.446 verify_dump=1 00:15:13.446 verify_backlog=512 00:15:13.446 verify_state_save=0 00:15:13.446 do_verify=1 00:15:13.446 verify=crc32c-intel 00:15:13.446 [job0] 00:15:13.446 filename=/dev/nvme0n1 00:15:13.446 [job1] 00:15:13.446 filename=/dev/nvme0n2 00:15:13.446 [job2] 00:15:13.446 filename=/dev/nvme0n3 00:15:13.446 [job3] 00:15:13.446 filename=/dev/nvme0n4 00:15:13.446 Could not set queue depth (nvme0n1) 00:15:13.446 Could not set queue depth (nvme0n2) 00:15:13.446 Could not set queue depth (nvme0n3) 00:15:13.446 Could not set queue depth (nvme0n4) 00:15:13.446 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:13.446 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:15:13.446 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:13.446 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:13.446 fio-3.35 00:15:13.446 Starting 4 threads 00:15:14.825 00:15:14.825 job0: (groupid=0, jobs=1): err= 0: pid=1076587: Mon Jul 15 18:40:31 2024 00:15:14.825 read: IOPS=1881, BW=7524KiB/s (7705kB/s)(7532KiB/1001msec) 00:15:14.825 slat (nsec): min=7041, max=37123, avg=8179.91, stdev=1654.83 00:15:14.825 clat (usec): min=204, max=463, avg=313.30, stdev=40.64 00:15:14.825 lat (usec): min=212, max=471, avg=321.48, stdev=40.70 00:15:14.825 clat percentiles (usec): 00:15:14.825 | 1.00th=[ 233], 5.00th=[ 247], 10.00th=[ 265], 20.00th=[ 297], 00:15:14.825 | 30.00th=[ 302], 40.00th=[ 306], 50.00th=[ 310], 60.00th=[ 314], 00:15:14.825 | 70.00th=[ 318], 80.00th=[ 326], 90.00th=[ 343], 95.00th=[ 416], 00:15:14.825 | 99.00th=[ 449], 99.50th=[ 453], 99.90th=[ 461], 99.95th=[ 465], 00:15:14.825 | 99.99th=[ 465] 00:15:14.825 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:15:14.825 slat (nsec): min=10507, max=41546, avg=11984.38, stdev=1879.63 00:15:14.825 clat (usec): min=131, max=283, avg=174.58, stdev=13.20 00:15:14.825 lat (usec): min=143, max=324, avg=186.56, stdev=13.53 00:15:14.825 clat percentiles (usec): 00:15:14.825 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:15:14.825 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:15:14.825 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 198], 00:15:14.825 | 99.00th=[ 219], 99.50th=[ 225], 99.90th=[ 243], 99.95th=[ 253], 00:15:14.825 | 99.99th=[ 285] 00:15:14.825 bw ( KiB/s): min= 8192, max= 8192, per=58.46%, avg=8192.00, stdev= 0.00, samples=1 00:15:14.825 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:14.825 lat (usec) : 250=54.90%, 500=45.10% 00:15:14.825 cpu : usr=3.30%, 
sys=6.30%, ctx=3932, majf=0, minf=1 00:15:14.825 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:14.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.825 issued rwts: total=1883,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.825 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:14.825 job1: (groupid=0, jobs=1): err= 0: pid=1076588: Mon Jul 15 18:40:31 2024 00:15:14.825 read: IOPS=21, BW=87.0KiB/s (89.0kB/s)(88.0KiB/1012msec) 00:15:14.825 slat (nsec): min=9577, max=27635, avg=21749.41, stdev=3211.40 00:15:14.825 clat (usec): min=40595, max=41885, avg=41006.57, stdev=246.24 00:15:14.825 lat (usec): min=40615, max=41909, avg=41028.32, stdev=247.36 00:15:14.825 clat percentiles (usec): 00:15:14.825 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:15:14.825 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:14.825 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:14.825 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:15:14.825 | 99.99th=[41681] 00:15:14.825 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:15:14.825 slat (nsec): min=10381, max=42740, avg=12073.22, stdev=1997.01 00:15:14.825 clat (usec): min=167, max=292, avg=197.44, stdev=16.44 00:15:14.825 lat (usec): min=178, max=307, avg=209.51, stdev=16.73 00:15:14.825 clat percentiles (usec): 00:15:14.825 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 186], 00:15:14.825 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 198], 00:15:14.825 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 217], 95.00th=[ 225], 00:15:14.825 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 293], 99.95th=[ 293], 00:15:14.825 | 99.99th=[ 293] 00:15:14.825 bw ( KiB/s): min= 4096, max= 4096, per=29.23%, avg=4096.00, stdev= 0.00, samples=1 
00:15:14.825 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:14.825 lat (usec) : 250=94.38%, 500=1.50% 00:15:14.825 lat (msec) : 50=4.12% 00:15:14.825 cpu : usr=0.79%, sys=0.49%, ctx=535, majf=0, minf=2 00:15:14.825 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:14.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.825 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.825 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:14.825 job2: (groupid=0, jobs=1): err= 0: pid=1076594: Mon Jul 15 18:40:31 2024 00:15:14.825 read: IOPS=21, BW=87.3KiB/s (89.4kB/s)(88.0KiB/1008msec) 00:15:14.825 slat (nsec): min=10572, max=24970, avg=23459.68, stdev=2902.66 00:15:14.825 clat (usec): min=40894, max=41041, avg=40978.56, stdev=36.19 00:15:14.825 lat (usec): min=40918, max=41065, avg=41002.02, stdev=35.44 00:15:14.825 clat percentiles (usec): 00:15:14.825 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:15:14.825 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:14.825 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:14.825 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:14.825 | 99.99th=[41157] 00:15:14.825 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:15:14.825 slat (nsec): min=10653, max=45488, avg=12439.53, stdev=2381.38 00:15:14.825 clat (usec): min=150, max=264, avg=190.66, stdev=14.07 00:15:14.825 lat (usec): min=167, max=276, avg=203.10, stdev=14.24 00:15:14.825 clat percentiles (usec): 00:15:14.825 | 1.00th=[ 163], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 180], 00:15:14.825 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 192], 00:15:14.825 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 206], 95.00th=[ 212], 00:15:14.825 | 
99.00th=[ 235], 99.50th=[ 258], 99.90th=[ 265], 99.95th=[ 265], 00:15:14.825 | 99.99th=[ 265] 00:15:14.825 bw ( KiB/s): min= 4096, max= 4096, per=29.23%, avg=4096.00, stdev= 0.00, samples=1 00:15:14.825 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:14.825 lat (usec) : 250=95.32%, 500=0.56% 00:15:14.825 lat (msec) : 50=4.12% 00:15:14.825 cpu : usr=0.40%, sys=0.99%, ctx=537, majf=0, minf=1 00:15:14.825 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:14.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.825 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.825 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:14.825 job3: (groupid=0, jobs=1): err= 0: pid=1076596: Mon Jul 15 18:40:31 2024 00:15:14.825 read: IOPS=94, BW=379KiB/s (388kB/s)(388KiB/1023msec) 00:15:14.825 slat (nsec): min=8353, max=29057, avg=12522.52, stdev=6208.93 00:15:14.825 clat (usec): min=294, max=41307, avg=9106.57, stdev=16759.71 00:15:14.825 lat (usec): min=305, max=41316, avg=9119.09, stdev=16762.30 00:15:14.825 clat percentiles (usec): 00:15:14.825 | 1.00th=[ 293], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 314], 00:15:14.825 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 355], 60.00th=[ 371], 00:15:14.825 | 70.00th=[ 371], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:15:14.825 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:14.825 | 99.99th=[41157] 00:15:14.825 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:15:14.825 slat (usec): min=7, max=29240, avg=68.71, stdev=1291.78 00:15:14.825 clat (usec): min=151, max=347, avg=196.94, stdev=19.91 00:15:14.825 lat (usec): min=163, max=29536, avg=265.64, stdev=1296.33 00:15:14.825 clat percentiles (usec): 00:15:14.825 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 178], 20.00th=[ 184], 
00:15:14.825 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 200], 00:15:14.825 | 70.00th=[ 204], 80.00th=[ 208], 90.00th=[ 219], 95.00th=[ 229], 00:15:14.825 | 99.00th=[ 265], 99.50th=[ 297], 99.90th=[ 347], 99.95th=[ 347], 00:15:14.825 | 99.99th=[ 347] 00:15:14.825 bw ( KiB/s): min= 4096, max= 4096, per=29.23%, avg=4096.00, stdev= 0.00, samples=1 00:15:14.825 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:14.825 lat (usec) : 250=82.76%, 500=13.63%, 750=0.16% 00:15:14.825 lat (msec) : 50=3.45% 00:15:14.825 cpu : usr=0.29%, sys=1.17%, ctx=612, majf=0, minf=1 00:15:14.825 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:14.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.825 issued rwts: total=97,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.825 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:14.825 00:15:14.825 Run status group 0 (all jobs): 00:15:14.825 READ: bw=7914KiB/s (8104kB/s), 87.0KiB/s-7524KiB/s (89.0kB/s-7705kB/s), io=8096KiB (8290kB), run=1001-1023msec 00:15:14.825 WRITE: bw=13.7MiB/s (14.3MB/s), 2002KiB/s-8184KiB/s (2050kB/s-8380kB/s), io=14.0MiB (14.7MB), run=1001-1023msec 00:15:14.825 00:15:14.825 Disk stats (read/write): 00:15:14.825 nvme0n1: ios=1559/1919, merge=0/0, ticks=1324/321, in_queue=1645, util=86.06% 00:15:14.825 nvme0n2: ios=67/512, merge=0/0, ticks=1620/95, in_queue=1715, util=90.04% 00:15:14.825 nvme0n3: ios=75/512, merge=0/0, ticks=1879/95, in_queue=1974, util=93.44% 00:15:14.825 nvme0n4: ios=152/512, merge=0/0, ticks=930/93, in_queue=1023, util=95.49% 00:15:14.825 18:40:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:14.825 [global] 00:15:14.825 thread=1 00:15:14.825 invalidate=1 00:15:14.825 rw=randwrite 
00:15:14.825 time_based=1 00:15:14.825 runtime=1 00:15:14.825 ioengine=libaio 00:15:14.825 direct=1 00:15:14.825 bs=4096 00:15:14.825 iodepth=1 00:15:14.825 norandommap=0 00:15:14.825 numjobs=1 00:15:14.825 00:15:14.825 verify_dump=1 00:15:14.825 verify_backlog=512 00:15:14.825 verify_state_save=0 00:15:14.825 do_verify=1 00:15:14.825 verify=crc32c-intel 00:15:14.826 [job0] 00:15:14.826 filename=/dev/nvme0n1 00:15:14.826 [job1] 00:15:14.826 filename=/dev/nvme0n2 00:15:14.826 [job2] 00:15:14.826 filename=/dev/nvme0n3 00:15:14.826 [job3] 00:15:14.826 filename=/dev/nvme0n4 00:15:14.826 Could not set queue depth (nvme0n1) 00:15:14.826 Could not set queue depth (nvme0n2) 00:15:14.826 Could not set queue depth (nvme0n3) 00:15:14.826 Could not set queue depth (nvme0n4) 00:15:14.826 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:14.826 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:14.826 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:14.826 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:14.826 fio-3.35 00:15:14.826 Starting 4 threads 00:15:16.204 00:15:16.204 job0: (groupid=0, jobs=1): err= 0: pid=1076964: Mon Jul 15 18:40:32 2024 00:15:16.204 read: IOPS=321, BW=1286KiB/s (1317kB/s)(1304KiB/1014msec) 00:15:16.204 slat (nsec): min=6536, max=26849, avg=8583.95, stdev=3531.06 00:15:16.204 clat (usec): min=248, max=41456, avg=2739.11, stdev=9527.67 00:15:16.204 lat (usec): min=255, max=41463, avg=2747.70, stdev=9530.54 00:15:16.204 clat percentiles (usec): 00:15:16.204 | 1.00th=[ 255], 5.00th=[ 260], 10.00th=[ 269], 20.00th=[ 277], 00:15:16.204 | 30.00th=[ 318], 40.00th=[ 343], 50.00th=[ 388], 60.00th=[ 429], 00:15:16.204 | 70.00th=[ 441], 80.00th=[ 457], 90.00th=[ 506], 95.00th=[40633], 00:15:16.204 | 
99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:15:16.204 | 99.99th=[41681] 00:15:16.204 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:15:16.204 slat (nsec): min=8724, max=60285, avg=11075.54, stdev=4752.02 00:15:16.204 clat (usec): min=141, max=491, avg=214.66, stdev=27.84 00:15:16.204 lat (usec): min=170, max=531, avg=225.73, stdev=28.26 00:15:16.204 clat percentiles (usec): 00:15:16.204 | 1.00th=[ 169], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 192], 00:15:16.204 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 212], 60.00th=[ 221], 00:15:16.204 | 70.00th=[ 229], 80.00th=[ 233], 90.00th=[ 245], 95.00th=[ 255], 00:15:16.204 | 99.00th=[ 302], 99.50th=[ 314], 99.90th=[ 490], 99.95th=[ 490], 00:15:16.204 | 99.99th=[ 490] 00:15:16.204 bw ( KiB/s): min= 4096, max= 4096, per=20.53%, avg=4096.00, stdev= 0.00, samples=1 00:15:16.204 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:16.204 lat (usec) : 250=57.28%, 500=38.42%, 750=2.03% 00:15:16.204 lat (msec) : 50=2.27% 00:15:16.204 cpu : usr=0.49%, sys=0.69%, ctx=838, majf=0, minf=2 00:15:16.204 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:16.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.204 issued rwts: total=326,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.204 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:16.204 job1: (groupid=0, jobs=1): err= 0: pid=1076965: Mon Jul 15 18:40:32 2024 00:15:16.204 read: IOPS=21, BW=85.8KiB/s (87.8kB/s)(88.0KiB/1026msec) 00:15:16.204 slat (nsec): min=9873, max=22380, avg=14119.18, stdev=3768.88 00:15:16.204 clat (usec): min=40898, max=42000, avg=41089.68, stdev=305.71 00:15:16.204 lat (usec): min=40912, max=42014, avg=41103.80, stdev=304.88 00:15:16.204 clat percentiles (usec): 00:15:16.204 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 
20.00th=[41157], 00:15:16.204 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:16.204 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:15:16.204 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:16.204 | 99.99th=[42206] 00:15:16.204 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:15:16.204 slat (nsec): min=9429, max=59217, avg=12176.60, stdev=2963.88 00:15:16.204 clat (usec): min=178, max=441, avg=221.91, stdev=32.07 00:15:16.204 lat (usec): min=190, max=486, avg=234.09, stdev=32.44 00:15:16.204 clat percentiles (usec): 00:15:16.204 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 196], 00:15:16.204 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 215], 60.00th=[ 223], 00:15:16.204 | 70.00th=[ 235], 80.00th=[ 245], 90.00th=[ 260], 95.00th=[ 289], 00:15:16.204 | 99.00th=[ 330], 99.50th=[ 334], 99.90th=[ 441], 99.95th=[ 441], 00:15:16.204 | 99.99th=[ 441] 00:15:16.204 bw ( KiB/s): min= 4096, max= 4096, per=20.53%, avg=4096.00, stdev= 0.00, samples=1 00:15:16.204 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:16.204 lat (usec) : 250=82.02%, 500=13.86% 00:15:16.204 lat (msec) : 50=4.12% 00:15:16.204 cpu : usr=0.59%, sys=0.29%, ctx=536, majf=0, minf=1 00:15:16.204 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:16.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.204 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.204 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:16.204 job2: (groupid=0, jobs=1): err= 0: pid=1076966: Mon Jul 15 18:40:32 2024 00:15:16.204 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:15:16.204 slat (nsec): min=6580, max=25061, avg=7770.81, stdev=1243.27 00:15:16.204 clat (usec): min=288, max=567, avg=357.73, 
stdev=34.07 00:15:16.204 lat (usec): min=296, max=591, avg=365.50, stdev=34.22 00:15:16.204 clat percentiles (usec): 00:15:16.204 | 1.00th=[ 297], 5.00th=[ 314], 10.00th=[ 326], 20.00th=[ 334], 00:15:16.204 | 30.00th=[ 343], 40.00th=[ 347], 50.00th=[ 351], 60.00th=[ 359], 00:15:16.204 | 70.00th=[ 363], 80.00th=[ 375], 90.00th=[ 412], 95.00th=[ 429], 00:15:16.204 | 99.00th=[ 457], 99.50th=[ 486], 99.90th=[ 529], 99.95th=[ 570], 00:15:16.204 | 99.99th=[ 570] 00:15:16.204 write: IOPS=2043, BW=8176KiB/s (8372kB/s)(8184KiB/1001msec); 0 zone resets 00:15:16.204 slat (nsec): min=9347, max=36598, avg=11159.77, stdev=1827.01 00:15:16.205 clat (usec): min=132, max=418, avg=198.91, stdev=29.39 00:15:16.205 lat (usec): min=144, max=454, avg=210.07, stdev=29.91 00:15:16.205 clat percentiles (usec): 00:15:16.205 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 178], 00:15:16.205 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 198], 00:15:16.205 | 70.00th=[ 206], 80.00th=[ 221], 90.00th=[ 237], 95.00th=[ 249], 00:15:16.205 | 99.00th=[ 310], 99.50th=[ 330], 99.90th=[ 392], 99.95th=[ 416], 00:15:16.205 | 99.99th=[ 420] 00:15:16.205 bw ( KiB/s): min= 8192, max= 8192, per=41.06%, avg=8192.00, stdev= 0.00, samples=1 00:15:16.205 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:16.205 lat (usec) : 250=54.66%, 500=45.23%, 750=0.11% 00:15:16.205 cpu : usr=2.50%, sys=3.00%, ctx=3583, majf=0, minf=1 00:15:16.205 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:16.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.205 issued rwts: total=1536,2046,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.205 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:16.205 job3: (groupid=0, jobs=1): err= 0: pid=1076967: Mon Jul 15 18:40:32 2024 00:15:16.205 read: IOPS=1742, BW=6968KiB/s 
(7135kB/s)(6996KiB/1004msec) 00:15:16.205 slat (nsec): min=6315, max=26228, avg=7411.55, stdev=1488.07 00:15:16.205 clat (usec): min=235, max=41501, avg=327.65, stdev=986.65 00:15:16.205 lat (usec): min=242, max=41525, avg=335.06, stdev=987.04 00:15:16.205 clat percentiles (usec): 00:15:16.205 | 1.00th=[ 245], 5.00th=[ 255], 10.00th=[ 262], 20.00th=[ 269], 00:15:16.205 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 297], 00:15:16.205 | 70.00th=[ 306], 80.00th=[ 326], 90.00th=[ 379], 95.00th=[ 437], 00:15:16.205 | 99.00th=[ 506], 99.50th=[ 519], 99.90th=[ 668], 99.95th=[41681], 00:15:16.205 | 99.99th=[41681] 00:15:16.205 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:15:16.205 slat (nsec): min=9089, max=39484, avg=10307.74, stdev=1506.90 00:15:16.205 clat (usec): min=149, max=322, avg=189.19, stdev=23.81 00:15:16.205 lat (usec): min=160, max=361, avg=199.49, stdev=24.07 00:15:16.205 clat percentiles (usec): 00:15:16.205 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:15:16.205 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 190], 00:15:16.205 | 70.00th=[ 196], 80.00th=[ 204], 90.00th=[ 227], 95.00th=[ 239], 00:15:16.205 | 99.00th=[ 258], 99.50th=[ 281], 99.90th=[ 310], 99.95th=[ 318], 00:15:16.205 | 99.99th=[ 322] 00:15:16.205 bw ( KiB/s): min= 8192, max= 8192, per=41.06%, avg=8192.00, stdev= 0.00, samples=2 00:15:16.205 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:15:16.205 lat (usec) : 250=54.15%, 500=45.30%, 750=0.53% 00:15:16.205 lat (msec) : 50=0.03% 00:15:16.205 cpu : usr=1.60%, sys=3.79%, ctx=3798, majf=0, minf=1 00:15:16.205 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:16.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.205 issued rwts: total=1749,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.205 latency 
: target=0, window=0, percentile=100.00%, depth=1 00:15:16.205 00:15:16.205 Run status group 0 (all jobs): 00:15:16.205 READ: bw=13.8MiB/s (14.5MB/s), 85.8KiB/s-6968KiB/s (87.8kB/s-7135kB/s), io=14.2MiB (14.9MB), run=1001-1026msec 00:15:16.205 WRITE: bw=19.5MiB/s (20.4MB/s), 1996KiB/s-8176KiB/s (2044kB/s-8372kB/s), io=20.0MiB (21.0MB), run=1001-1026msec 00:15:16.205 00:15:16.205 Disk stats (read/write): 00:15:16.205 nvme0n1: ios=369/512, merge=0/0, ticks=674/105, in_queue=779, util=82.67% 00:15:16.205 nvme0n2: ios=52/512, merge=0/0, ticks=1087/112, in_queue=1199, util=98.87% 00:15:16.205 nvme0n3: ios=1298/1536, merge=0/0, ticks=1402/298, in_queue=1700, util=97.73% 00:15:16.205 nvme0n4: ios=1578/1566, merge=0/0, ticks=1347/293, in_queue=1640, util=98.13% 00:15:16.205 18:40:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:16.205 [global] 00:15:16.205 thread=1 00:15:16.205 invalidate=1 00:15:16.205 rw=write 00:15:16.205 time_based=1 00:15:16.205 runtime=1 00:15:16.205 ioengine=libaio 00:15:16.205 direct=1 00:15:16.205 bs=4096 00:15:16.205 iodepth=128 00:15:16.205 norandommap=0 00:15:16.205 numjobs=1 00:15:16.205 00:15:16.205 verify_dump=1 00:15:16.205 verify_backlog=512 00:15:16.205 verify_state_save=0 00:15:16.205 do_verify=1 00:15:16.205 verify=crc32c-intel 00:15:16.205 [job0] 00:15:16.205 filename=/dev/nvme0n1 00:15:16.205 [job1] 00:15:16.205 filename=/dev/nvme0n2 00:15:16.205 [job2] 00:15:16.205 filename=/dev/nvme0n3 00:15:16.205 [job3] 00:15:16.205 filename=/dev/nvme0n4 00:15:16.205 Could not set queue depth (nvme0n1) 00:15:16.205 Could not set queue depth (nvme0n2) 00:15:16.205 Could not set queue depth (nvme0n3) 00:15:16.205 Could not set queue depth (nvme0n4) 00:15:16.464 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:16.464 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:16.464 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:16.464 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:16.464 fio-3.35 00:15:16.464 Starting 4 threads 00:15:17.840 00:15:17.840 job0: (groupid=0, jobs=1): err= 0: pid=1077337: Mon Jul 15 18:40:34 2024 00:15:17.840 read: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec) 00:15:17.840 slat (nsec): min=1304, max=17018k, avg=112773.47, stdev=813319.31 00:15:17.840 clat (usec): min=3655, max=46451, avg=13347.98, stdev=5501.50 00:15:17.840 lat (usec): min=3663, max=46454, avg=13460.75, stdev=5564.78 00:15:17.840 clat percentiles (usec): 00:15:17.840 | 1.00th=[ 5800], 5.00th=[ 8160], 10.00th=[ 8455], 20.00th=[ 9503], 00:15:17.840 | 30.00th=[10552], 40.00th=[11207], 50.00th=[11863], 60.00th=[12125], 00:15:17.840 | 70.00th=[14484], 80.00th=[15926], 90.00th=[19530], 95.00th=[26084], 00:15:17.840 | 99.00th=[32375], 99.50th=[43254], 99.90th=[46400], 99.95th=[46400], 00:15:17.840 | 99.99th=[46400] 00:15:17.840 write: IOPS=5003, BW=19.5MiB/s (20.5MB/s)(19.7MiB/1010msec); 0 zone resets 00:15:17.840 slat (usec): min=2, max=11862, avg=90.16, stdev=439.44 00:15:17.840 clat (usec): min=1550, max=55620, avg=13167.35, stdev=8123.31 00:15:17.840 lat (usec): min=1564, max=55626, avg=13257.51, stdev=8158.18 00:15:17.840 clat percentiles (usec): 00:15:17.840 | 1.00th=[ 3589], 5.00th=[ 5800], 10.00th=[ 7242], 20.00th=[ 8979], 00:15:17.840 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10945], 60.00th=[11338], 00:15:17.840 | 70.00th=[12649], 80.00th=[16450], 90.00th=[20055], 95.00th=[26870], 00:15:17.840 | 99.00th=[52691], 99.50th=[54264], 99.90th=[55837], 99.95th=[55837], 00:15:17.840 | 99.99th=[55837] 00:15:17.840 bw ( KiB/s): min=18936, max=20480, per=26.69%, avg=19708.00, stdev=1091.77, samples=2 00:15:17.840 iops : min= 4734, max= 5120, 
avg=4927.00, stdev=272.94, samples=2 00:15:17.840 lat (msec) : 2=0.02%, 4=1.03%, 10=32.44%, 20=56.86%, 50=8.91% 00:15:17.840 lat (msec) : 100=0.73% 00:15:17.840 cpu : usr=3.07%, sys=5.05%, ctx=655, majf=0, minf=1 00:15:17.840 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:15:17.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:17.840 issued rwts: total=4608,5054,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:17.840 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:17.840 job1: (groupid=0, jobs=1): err= 0: pid=1077338: Mon Jul 15 18:40:34 2024 00:15:17.840 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.2MiB/1010msec) 00:15:17.840 slat (nsec): min=1028, max=15008k, avg=92367.25, stdev=758309.56 00:15:17.840 clat (usec): min=2419, max=44112, avg=13395.43, stdev=5647.63 00:15:17.840 lat (usec): min=2427, max=46904, avg=13487.80, stdev=5698.73 00:15:17.840 clat percentiles (usec): 00:15:17.840 | 1.00th=[ 4555], 5.00th=[ 6128], 10.00th=[ 7504], 20.00th=[ 9634], 00:15:17.840 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11863], 60.00th=[12125], 00:15:17.840 | 70.00th=[15533], 80.00th=[18482], 90.00th=[20841], 95.00th=[24249], 00:15:17.840 | 99.00th=[28181], 99.50th=[28443], 99.90th=[43779], 99.95th=[43779], 00:15:17.840 | 99.99th=[44303] 00:15:17.840 write: IOPS=5576, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1010msec); 0 zone resets 00:15:17.840 slat (nsec): min=1880, max=14913k, avg=66843.95, stdev=483944.08 00:15:17.840 clat (usec): min=1082, max=26792, avg=10473.95, stdev=4214.76 00:15:17.840 lat (usec): min=1090, max=27308, avg=10540.80, stdev=4237.66 00:15:17.840 clat percentiles (usec): 00:15:17.840 | 1.00th=[ 1811], 5.00th=[ 4113], 10.00th=[ 5211], 20.00th=[ 7046], 00:15:17.840 | 30.00th=[ 8717], 40.00th=[10028], 50.00th=[10421], 60.00th=[11076], 00:15:17.840 | 70.00th=[11338], 80.00th=[12387], 90.00th=[15795], 
95.00th=[19268], 00:15:17.840 | 99.00th=[23200], 99.50th=[23725], 99.90th=[25035], 99.95th=[25297], 00:15:17.840 | 99.99th=[26870] 00:15:17.840 bw ( KiB/s): min=20344, max=24064, per=30.07%, avg=22204.00, stdev=2630.44, samples=2 00:15:17.840 iops : min= 5086, max= 6016, avg=5551.00, stdev=657.61, samples=2 00:15:17.840 lat (msec) : 2=0.58%, 4=2.16%, 10=30.80%, 20=58.32%, 50=8.14% 00:15:17.840 cpu : usr=3.67%, sys=4.66%, ctx=611, majf=0, minf=1 00:15:17.840 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:15:17.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:17.840 issued rwts: total=5166,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:17.840 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:17.840 job2: (groupid=0, jobs=1): err= 0: pid=1077340: Mon Jul 15 18:40:34 2024 00:15:17.840 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:15:17.840 slat (nsec): min=1317, max=31991k, avg=111927.74, stdev=935115.36 00:15:17.840 clat (usec): min=4615, max=46014, avg=15094.36, stdev=6551.93 00:15:17.840 lat (usec): min=4622, max=46041, avg=15206.29, stdev=6611.66 00:15:17.840 clat percentiles (usec): 00:15:17.840 | 1.00th=[ 6259], 5.00th=[ 7111], 10.00th=[ 9896], 20.00th=[11076], 00:15:17.840 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13435], 60.00th=[14222], 00:15:17.840 | 70.00th=[15270], 80.00th=[17695], 90.00th=[21890], 95.00th=[24773], 00:15:17.840 | 99.00th=[44303], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:15:17.840 | 99.99th=[45876] 00:15:17.840 write: IOPS=4476, BW=17.5MiB/s (18.3MB/s)(17.6MiB/1006msec); 0 zone resets 00:15:17.840 slat (usec): min=2, max=16421, avg=104.00, stdev=709.30 00:15:17.840 clat (usec): min=510, max=39955, avg=14612.68, stdev=6837.18 00:15:17.840 lat (usec): min=559, max=39963, avg=14716.68, stdev=6891.57 00:15:17.840 clat percentiles (usec): 00:15:17.840 | 
1.00th=[ 3621], 5.00th=[ 5997], 10.00th=[ 7701], 20.00th=[ 9241], 00:15:17.840 | 30.00th=[11207], 40.00th=[12125], 50.00th=[13042], 60.00th=[14615], 00:15:17.840 | 70.00th=[16450], 80.00th=[18482], 90.00th=[21627], 95.00th=[31589], 00:15:17.840 | 99.00th=[37487], 99.50th=[39060], 99.90th=[40109], 99.95th=[40109], 00:15:17.840 | 99.99th=[40109] 00:15:17.840 bw ( KiB/s): min=16384, max=18624, per=23.70%, avg=17504.00, stdev=1583.92, samples=2 00:15:17.840 iops : min= 4096, max= 4656, avg=4376.00, stdev=395.98, samples=2 00:15:17.840 lat (usec) : 750=0.01% 00:15:17.840 lat (msec) : 2=0.02%, 4=0.70%, 10=18.96%, 20=63.27%, 50=17.04% 00:15:17.840 cpu : usr=3.48%, sys=4.88%, ctx=453, majf=0, minf=1 00:15:17.840 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:15:17.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:17.840 issued rwts: total=4096,4503,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:17.840 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:17.840 job3: (groupid=0, jobs=1): err= 0: pid=1077341: Mon Jul 15 18:40:34 2024 00:15:17.840 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:15:17.840 slat (nsec): min=1059, max=20367k, avg=138515.79, stdev=955192.79 00:15:17.840 clat (usec): min=7391, max=49702, avg=18026.72, stdev=6883.49 00:15:17.840 lat (usec): min=7399, max=53344, avg=18165.24, stdev=6940.09 00:15:17.840 clat percentiles (usec): 00:15:17.840 | 1.00th=[ 8356], 5.00th=[11207], 10.00th=[11994], 20.00th=[13435], 00:15:17.840 | 30.00th=[13960], 40.00th=[14615], 50.00th=[15401], 60.00th=[17695], 00:15:17.840 | 70.00th=[19792], 80.00th=[21365], 90.00th=[28705], 95.00th=[34341], 00:15:17.840 | 99.00th=[41157], 99.50th=[41157], 99.90th=[43254], 99.95th=[43254], 00:15:17.840 | 99.99th=[49546] 00:15:17.841 write: IOPS=3443, BW=13.4MiB/s (14.1MB/s)(13.5MiB/1004msec); 0 zone resets 00:15:17.841 slat 
(usec): min=2, max=19587, avg=159.48, stdev=1014.85 00:15:17.841 clat (usec): min=454, max=90070, avg=20799.75, stdev=13274.51 00:15:17.841 lat (usec): min=3327, max=90079, avg=20959.23, stdev=13347.45 00:15:17.841 clat percentiles (usec): 00:15:17.841 | 1.00th=[ 6718], 5.00th=[10552], 10.00th=[12387], 20.00th=[13042], 00:15:17.841 | 30.00th=[13304], 40.00th=[14091], 50.00th=[17695], 60.00th=[20317], 00:15:17.841 | 70.00th=[21890], 80.00th=[24511], 90.00th=[28181], 95.00th=[43254], 00:15:17.841 | 99.00th=[85459], 99.50th=[87557], 99.90th=[89654], 99.95th=[89654], 00:15:17.841 | 99.99th=[89654] 00:15:17.841 bw ( KiB/s): min=11704, max=14928, per=18.03%, avg=13316.00, stdev=2279.71, samples=2 00:15:17.841 iops : min= 2926, max= 3732, avg=3329.00, stdev=569.93, samples=2 00:15:17.841 lat (usec) : 500=0.02% 00:15:17.841 lat (msec) : 4=0.49%, 10=2.45%, 20=62.83%, 50=32.21%, 100=2.01% 00:15:17.841 cpu : usr=2.89%, sys=2.49%, ctx=397, majf=0, minf=1 00:15:17.841 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:15:17.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:17.841 issued rwts: total=3072,3457,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:17.841 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:17.841 00:15:17.841 Run status group 0 (all jobs): 00:15:17.841 READ: bw=65.5MiB/s (68.7MB/s), 12.0MiB/s-20.0MiB/s (12.5MB/s-20.9MB/s), io=66.2MiB (69.4MB), run=1004-1010msec 00:15:17.841 WRITE: bw=72.1MiB/s (75.6MB/s), 13.4MiB/s-21.8MiB/s (14.1MB/s-22.8MB/s), io=72.8MiB (76.4MB), run=1004-1010msec 00:15:17.841 00:15:17.841 Disk stats (read/write): 00:15:17.841 nvme0n1: ios=3851/4096, merge=0/0, ticks=51013/55519, in_queue=106532, util=96.79% 00:15:17.841 nvme0n2: ios=4397/4608, merge=0/0, ticks=48167/41010, in_queue=89177, util=97.46% 00:15:17.841 nvme0n3: ios=3567/3584, merge=0/0, ticks=42991/47659, in_queue=90650, 
util=98.54% 00:15:17.841 nvme0n4: ios=2602/2575, merge=0/0, ticks=29561/40665, in_queue=70226, util=96.23% 00:15:17.841 18:40:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:17.841 [global] 00:15:17.841 thread=1 00:15:17.841 invalidate=1 00:15:17.841 rw=randwrite 00:15:17.841 time_based=1 00:15:17.841 runtime=1 00:15:17.841 ioengine=libaio 00:15:17.841 direct=1 00:15:17.841 bs=4096 00:15:17.841 iodepth=128 00:15:17.841 norandommap=0 00:15:17.841 numjobs=1 00:15:17.841 00:15:17.841 verify_dump=1 00:15:17.841 verify_backlog=512 00:15:17.841 verify_state_save=0 00:15:17.841 do_verify=1 00:15:17.841 verify=crc32c-intel 00:15:17.841 [job0] 00:15:17.841 filename=/dev/nvme0n1 00:15:17.841 [job1] 00:15:17.841 filename=/dev/nvme0n2 00:15:17.841 [job2] 00:15:17.841 filename=/dev/nvme0n3 00:15:17.841 [job3] 00:15:17.841 filename=/dev/nvme0n4 00:15:17.841 Could not set queue depth (nvme0n1) 00:15:17.841 Could not set queue depth (nvme0n2) 00:15:17.841 Could not set queue depth (nvme0n3) 00:15:17.841 Could not set queue depth (nvme0n4) 00:15:18.099 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:18.099 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:18.099 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:18.099 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:18.099 fio-3.35 00:15:18.099 Starting 4 threads 00:15:19.491 00:15:19.491 job0: (groupid=0, jobs=1): err= 0: pid=1077709: Mon Jul 15 18:40:35 2024 00:15:19.491 read: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec) 00:15:19.491 slat (nsec): min=1034, max=30572k, avg=177975.94, stdev=1427538.38 00:15:19.491 clat (usec): min=6444, 
max=80849, avg=21556.19, stdev=14619.73 00:15:19.491 lat (usec): min=6447, max=80876, avg=21734.16, stdev=14715.07 00:15:19.491 clat percentiles (usec): 00:15:19.491 | 1.00th=[ 7242], 5.00th=[ 8717], 10.00th=[10028], 20.00th=[10552], 00:15:19.491 | 30.00th=[12387], 40.00th=[15401], 50.00th=[16712], 60.00th=[18744], 00:15:19.491 | 70.00th=[21890], 80.00th=[25822], 90.00th=[43254], 95.00th=[55837], 00:15:19.491 | 99.00th=[70779], 99.50th=[70779], 99.90th=[70779], 99.95th=[72877], 00:15:19.491 | 99.99th=[81265] 00:15:19.491 write: IOPS=3234, BW=12.6MiB/s (13.2MB/s)(12.8MiB/1012msec); 0 zone resets 00:15:19.491 slat (nsec): min=1904, max=17800k, avg=131412.14, stdev=945593.90 00:15:19.491 clat (usec): min=1504, max=68551, avg=18835.99, stdev=11196.40 00:15:19.491 lat (usec): min=1508, max=68558, avg=18967.41, stdev=11275.21 00:15:19.491 clat percentiles (usec): 00:15:19.491 | 1.00th=[ 6915], 5.00th=[ 8717], 10.00th=[ 9765], 20.00th=[10159], 00:15:19.491 | 30.00th=[10814], 40.00th=[12256], 50.00th=[15139], 60.00th=[17957], 00:15:19.491 | 70.00th=[21365], 80.00th=[25035], 90.00th=[35914], 95.00th=[43779], 00:15:19.491 | 99.00th=[56886], 99.50th=[58983], 99.90th=[62653], 99.95th=[63177], 00:15:19.491 | 99.99th=[68682] 00:15:19.491 bw ( KiB/s): min=10360, max=14800, per=17.42%, avg=12580.00, stdev=3139.55, samples=2 00:15:19.491 iops : min= 2590, max= 3700, avg=3145.00, stdev=784.89, samples=2 00:15:19.491 lat (msec) : 2=0.13%, 10=13.03%, 20=50.95%, 50=31.39%, 100=4.49% 00:15:19.491 cpu : usr=1.88%, sys=3.07%, ctx=353, majf=0, minf=1 00:15:19.491 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:15:19.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:19.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:19.492 issued rwts: total=3072,3273,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:19.492 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:19.492 job1: (groupid=0, jobs=1): 
err= 0: pid=1077710: Mon Jul 15 18:40:35 2024 00:15:19.492 read: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec) 00:15:19.492 slat (nsec): min=1280, max=6332.7k, avg=77740.75, stdev=446851.43 00:15:19.492 clat (usec): min=5296, max=19961, avg=10143.99, stdev=1671.43 00:15:19.492 lat (usec): min=5300, max=19969, avg=10221.73, stdev=1700.64 00:15:19.492 clat percentiles (usec): 00:15:19.492 | 1.00th=[ 6652], 5.00th=[ 7504], 10.00th=[ 8225], 20.00th=[ 8979], 00:15:19.492 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10159], 00:15:19.492 | 70.00th=[10552], 80.00th=[11207], 90.00th=[12387], 95.00th=[13173], 00:15:19.492 | 99.00th=[15008], 99.50th=[15401], 99.90th=[17695], 99.95th=[17695], 00:15:19.492 | 99.99th=[20055] 00:15:19.492 write: IOPS=6265, BW=24.5MiB/s (25.7MB/s)(24.6MiB/1005msec); 0 zone resets 00:15:19.492 slat (usec): min=2, max=7446, avg=78.06, stdev=455.33 00:15:19.492 clat (usec): min=980, max=21521, avg=10255.65, stdev=2047.68 00:15:19.492 lat (usec): min=4916, max=21557, avg=10333.71, stdev=2070.70 00:15:19.492 clat percentiles (usec): 00:15:19.492 | 1.00th=[ 5932], 5.00th=[ 7177], 10.00th=[ 8586], 20.00th=[ 9241], 00:15:19.492 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:15:19.492 | 70.00th=[10290], 80.00th=[11076], 90.00th=[13304], 95.00th=[14877], 00:15:19.492 | 99.00th=[16581], 99.50th=[16581], 99.90th=[17695], 99.95th=[20841], 00:15:19.492 | 99.99th=[21627] 00:15:19.492 bw ( KiB/s): min=23768, max=25592, per=34.17%, avg=24680.00, stdev=1289.76, samples=2 00:15:19.492 iops : min= 5942, max= 6398, avg=6170.00, stdev=322.44, samples=2 00:15:19.492 lat (usec) : 1000=0.01% 00:15:19.492 lat (msec) : 10=57.46%, 20=42.50%, 50=0.03% 00:15:19.492 cpu : usr=4.88%, sys=6.08%, ctx=576, majf=0, minf=1 00:15:19.492 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:15:19.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:19.492 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:19.492 issued rwts: total=6144,6297,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:19.492 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:19.492 job2: (groupid=0, jobs=1): err= 0: pid=1077711: Mon Jul 15 18:40:35 2024 00:15:19.492 read: IOPS=4505, BW=17.6MiB/s (18.5MB/s)(17.7MiB/1008msec) 00:15:19.492 slat (nsec): min=1334, max=14407k, avg=113613.81, stdev=840774.00 00:15:19.492 clat (usec): min=3862, max=33780, avg=14763.67, stdev=4632.60 00:15:19.492 lat (usec): min=5295, max=34592, avg=14877.29, stdev=4685.70 00:15:19.492 clat percentiles (usec): 00:15:19.492 | 1.00th=[ 5604], 5.00th=[ 8717], 10.00th=[10159], 20.00th=[11469], 00:15:19.492 | 30.00th=[12125], 40.00th=[12780], 50.00th=[13698], 60.00th=[14877], 00:15:19.492 | 70.00th=[16581], 80.00th=[18482], 90.00th=[20841], 95.00th=[24773], 00:15:19.492 | 99.00th=[28181], 99.50th=[28181], 99.90th=[28443], 99.95th=[29754], 00:15:19.492 | 99.99th=[33817] 00:15:19.492 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:15:19.492 slat (nsec): min=1951, max=15386k, avg=89628.97, stdev=670893.15 00:15:19.492 clat (usec): min=1262, max=33055, avg=13139.72, stdev=4981.32 00:15:19.492 lat (usec): min=1272, max=33060, avg=13229.35, stdev=5020.69 00:15:19.492 clat percentiles (usec): 00:15:19.492 | 1.00th=[ 3949], 5.00th=[ 6783], 10.00th=[ 7963], 20.00th=[10159], 00:15:19.492 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11338], 60.00th=[12256], 00:15:19.492 | 70.00th=[13960], 80.00th=[16909], 90.00th=[21103], 95.00th=[22152], 00:15:19.492 | 99.00th=[28705], 99.50th=[29754], 99.90th=[30016], 99.95th=[30016], 00:15:19.492 | 99.99th=[33162] 00:15:19.492 bw ( KiB/s): min=17576, max=19288, per=25.52%, avg=18432.00, stdev=1210.57, samples=2 00:15:19.492 iops : min= 4394, max= 4822, avg=4608.00, stdev=302.64, samples=2 00:15:19.492 lat (msec) : 2=0.02%, 4=0.63%, 10=13.84%, 20=73.09%, 50=12.42% 00:15:19.492 cpu : usr=3.77%, sys=5.36%, ctx=332, 
majf=0, minf=1 00:15:19.492 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:15:19.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:19.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:19.492 issued rwts: total=4542,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:19.492 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:19.492 job3: (groupid=0, jobs=1): err= 0: pid=1077712: Mon Jul 15 18:40:35 2024 00:15:19.492 read: IOPS=3885, BW=15.2MiB/s (15.9MB/s)(15.2MiB/1002msec) 00:15:19.492 slat (nsec): min=1463, max=13578k, avg=117773.52, stdev=728196.74 00:15:19.492 clat (usec): min=852, max=45305, avg=14217.60, stdev=6047.80 00:15:19.492 lat (usec): min=3992, max=45324, avg=14335.37, stdev=6100.43 00:15:19.492 clat percentiles (usec): 00:15:19.492 | 1.00th=[ 4490], 5.00th=[ 8979], 10.00th=[ 9896], 20.00th=[10683], 00:15:19.492 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11994], 60.00th=[12518], 00:15:19.492 | 70.00th=[13304], 80.00th=[17433], 90.00th=[24249], 95.00th=[26870], 00:15:19.492 | 99.00th=[35914], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:15:19.492 | 99.99th=[45351] 00:15:19.492 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:15:19.492 slat (usec): min=2, max=11426, avg=124.54, stdev=640.94 00:15:19.492 clat (usec): min=1889, max=90496, avg=17238.16, stdev=13745.01 00:15:19.492 lat (usec): min=1918, max=90543, avg=17362.71, stdev=13831.49 00:15:19.492 clat percentiles (usec): 00:15:19.492 | 1.00th=[ 8356], 5.00th=[10421], 10.00th=[10814], 20.00th=[11207], 00:15:19.492 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11731], 60.00th=[12387], 00:15:19.492 | 70.00th=[14353], 80.00th=[18744], 90.00th=[29230], 95.00th=[41157], 00:15:19.492 | 99.00th=[85459], 99.50th=[88605], 99.90th=[90702], 99.95th=[90702], 00:15:19.492 | 99.99th=[90702] 00:15:19.492 bw ( KiB/s): min=13584, max=19184, per=22.68%, avg=16384.00, 
stdev=3959.80, samples=2 00:15:19.492 iops : min= 3396, max= 4796, avg=4096.00, stdev=989.95, samples=2 00:15:19.492 lat (usec) : 1000=0.01% 00:15:19.492 lat (msec) : 2=0.03%, 4=0.21%, 10=7.03%, 20=75.17%, 50=15.27% 00:15:19.492 lat (msec) : 100=2.28% 00:15:19.492 cpu : usr=2.10%, sys=4.60%, ctx=484, majf=0, minf=1 00:15:19.492 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:19.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:19.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:19.492 issued rwts: total=3893,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:19.492 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:19.492 00:15:19.492 Run status group 0 (all jobs): 00:15:19.492 READ: bw=68.1MiB/s (71.4MB/s), 11.9MiB/s-23.9MiB/s (12.4MB/s-25.0MB/s), io=68.9MiB (72.3MB), run=1002-1012msec 00:15:19.492 WRITE: bw=70.5MiB/s (74.0MB/s), 12.6MiB/s-24.5MiB/s (13.2MB/s-25.7MB/s), io=71.4MiB (74.8MB), run=1002-1012msec 00:15:19.492 00:15:19.492 Disk stats (read/write): 00:15:19.492 nvme0n1: ios=2730/3072, merge=0/0, ticks=20231/17370, in_queue=37601, util=98.40% 00:15:19.492 nvme0n2: ios=5140/5437, merge=0/0, ticks=26926/26185, in_queue=53111, util=98.48% 00:15:19.492 nvme0n3: ios=3590/4096, merge=0/0, ticks=38000/40097, in_queue=78097, util=89.41% 00:15:19.492 nvme0n4: ios=3112/3311, merge=0/0, ticks=17556/23832, in_queue=41388, util=96.34% 00:15:19.492 18:40:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:19.492 18:40:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1077942 00:15:19.492 18:40:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:19.492 18:40:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:19.492 [global] 00:15:19.492 thread=1 00:15:19.492 invalidate=1 00:15:19.492 rw=read 00:15:19.492 time_based=1 
00:15:19.492 runtime=10 00:15:19.492 ioengine=libaio 00:15:19.492 direct=1 00:15:19.492 bs=4096 00:15:19.492 iodepth=1 00:15:19.492 norandommap=1 00:15:19.492 numjobs=1 00:15:19.492 00:15:19.492 [job0] 00:15:19.492 filename=/dev/nvme0n1 00:15:19.492 [job1] 00:15:19.492 filename=/dev/nvme0n2 00:15:19.492 [job2] 00:15:19.492 filename=/dev/nvme0n3 00:15:19.492 [job3] 00:15:19.492 filename=/dev/nvme0n4 00:15:19.492 Could not set queue depth (nvme0n1) 00:15:19.492 Could not set queue depth (nvme0n2) 00:15:19.492 Could not set queue depth (nvme0n3) 00:15:19.492 Could not set queue depth (nvme0n4) 00:15:19.752 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:19.752 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:19.752 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:19.752 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:19.752 fio-3.35 00:15:19.752 Starting 4 threads 00:15:22.275 18:40:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:22.532 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=35770368, buflen=4096 00:15:22.532 fio: pid=1078087, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:22.532 18:40:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:22.790 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=37240832, buflen=4096 00:15:22.790 fio: pid=1078086, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:22.790 18:40:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:22.790 
18:40:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:23.049 18:40:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:23.049 18:40:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:23.049 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=3706880, buflen=4096 00:15:23.049 fio: pid=1078084, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:23.049 18:40:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:23.049 18:40:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:23.049 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=34058240, buflen=4096 00:15:23.049 fio: pid=1078085, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:23.049 00:15:23.049 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1078084: Mon Jul 15 18:40:39 2024 00:15:23.049 read: IOPS=288, BW=1152KiB/s (1180kB/s)(3620KiB/3142msec) 00:15:23.049 slat (usec): min=2, max=15887, avg=38.42, stdev=743.10 00:15:23.049 clat (usec): min=225, max=42056, avg=3409.39, stdev=10817.36 00:15:23.049 lat (usec): min=228, max=42065, avg=3430.43, stdev=10828.24 00:15:23.049 clat percentiles (usec): 00:15:23.049 | 1.00th=[ 233], 5.00th=[ 243], 10.00th=[ 251], 20.00th=[ 269], 00:15:23.049 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 306], 60.00th=[ 314], 00:15:23.049 | 70.00th=[ 326], 80.00th=[ 343], 90.00th=[ 412], 95.00th=[41157], 00:15:23.049 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:15:23.049 | 
99.99th=[42206] 00:15:23.049 bw ( KiB/s): min= 96, max= 6704, per=3.69%, avg=1201.67, stdev=2695.59, samples=6 00:15:23.049 iops : min= 24, max= 1676, avg=300.33, stdev=673.94, samples=6 00:15:23.049 lat (usec) : 250=9.60%, 500=82.01%, 750=0.55%, 1000=0.11% 00:15:23.049 lat (msec) : 50=7.62% 00:15:23.049 cpu : usr=0.03%, sys=0.19%, ctx=910, majf=0, minf=1 00:15:23.049 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:23.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:23.049 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:23.049 issued rwts: total=906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:23.049 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:23.049 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1078085: Mon Jul 15 18:40:39 2024 00:15:23.049 read: IOPS=2502, BW=9.77MiB/s (10.2MB/s)(32.5MiB/3323msec) 00:15:23.049 slat (usec): min=3, max=15520, avg=12.09, stdev=228.10 00:15:23.049 clat (usec): min=181, max=42091, avg=383.37, stdev=1553.82 00:15:23.049 lat (usec): min=186, max=52809, avg=395.47, stdev=1604.93 00:15:23.049 clat percentiles (usec): 00:15:23.049 | 1.00th=[ 233], 5.00th=[ 255], 10.00th=[ 281], 20.00th=[ 302], 00:15:23.049 | 30.00th=[ 310], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 334], 00:15:23.049 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 359], 95.00th=[ 375], 00:15:23.049 | 99.00th=[ 449], 99.50th=[ 498], 99.90th=[41157], 99.95th=[41157], 00:15:23.049 | 99.99th=[42206] 00:15:23.049 bw ( KiB/s): min= 6828, max=12760, per=33.38%, avg=10866.00, stdev=2093.30, samples=6 00:15:23.049 iops : min= 1707, max= 3190, avg=2716.50, stdev=523.33, samples=6 00:15:23.049 lat (usec) : 250=4.05%, 500=95.49%, 750=0.29%, 1000=0.01% 00:15:23.049 lat (msec) : 50=0.14% 00:15:23.049 cpu : usr=1.02%, sys=2.77%, ctx=8321, majf=0, minf=1 00:15:23.049 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:15:23.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:23.049 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:23.049 issued rwts: total=8316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:23.049 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:23.049 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1078086: Mon Jul 15 18:40:39 2024 00:15:23.049 read: IOPS=3115, BW=12.2MiB/s (12.8MB/s)(35.5MiB/2919msec) 00:15:23.049 slat (usec): min=2, max=15167, avg=11.29, stdev=214.58 00:15:23.049 clat (usec): min=177, max=41365, avg=305.39, stdev=432.03 00:15:23.049 lat (usec): min=180, max=41373, avg=316.68, stdev=482.06 00:15:23.049 clat percentiles (usec): 00:15:23.049 | 1.00th=[ 206], 5.00th=[ 239], 10.00th=[ 269], 20.00th=[ 285], 00:15:23.049 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 306], 00:15:23.049 | 70.00th=[ 314], 80.00th=[ 322], 90.00th=[ 338], 95.00th=[ 351], 00:15:23.049 | 99.00th=[ 404], 99.50th=[ 441], 99.90th=[ 510], 99.95th=[ 553], 00:15:23.049 | 99.99th=[41157] 00:15:23.049 bw ( KiB/s): min=11616, max=13016, per=38.00%, avg=12371.20, stdev=576.24, samples=5 00:15:23.049 iops : min= 2904, max= 3254, avg=3092.80, stdev=144.06, samples=5 00:15:23.049 lat (usec) : 250=6.87%, 500=92.98%, 750=0.11%, 1000=0.01% 00:15:23.049 lat (msec) : 50=0.01% 00:15:23.049 cpu : usr=0.89%, sys=3.46%, ctx=9100, majf=0, minf=1 00:15:23.049 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:23.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:23.049 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:23.049 issued rwts: total=9093,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:23.049 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:23.049 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O 
error): pid=1078087: Mon Jul 15 18:40:39 2024 00:15:23.049 read: IOPS=3228, BW=12.6MiB/s (13.2MB/s)(34.1MiB/2705msec) 00:15:23.049 slat (nsec): min=6349, max=29135, avg=7336.90, stdev=944.92 00:15:23.049 clat (usec): min=203, max=41168, avg=300.19, stdev=802.41 00:15:23.049 lat (usec): min=210, max=41177, avg=307.53, stdev=802.66 00:15:23.049 clat percentiles (usec): 00:15:23.049 | 1.00th=[ 239], 5.00th=[ 253], 10.00th=[ 265], 20.00th=[ 273], 00:15:23.049 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 281], 60.00th=[ 285], 00:15:23.049 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 310], 00:15:23.049 | 99.00th=[ 363], 99.50th=[ 375], 99.90th=[ 404], 99.95th=[ 1631], 00:15:23.049 | 99.99th=[41157] 00:15:23.049 bw ( KiB/s): min= 9672, max=13752, per=39.39%, avg=12822.40, stdev=1769.14, samples=5 00:15:23.049 iops : min= 2418, max= 3438, avg=3205.60, stdev=442.29, samples=5 00:15:23.049 lat (usec) : 250=3.79%, 500=96.13%, 750=0.01% 00:15:23.049 lat (msec) : 2=0.01%, 50=0.05% 00:15:23.049 cpu : usr=0.81%, sys=2.96%, ctx=8737, majf=0, minf=2 00:15:23.049 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:23.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:23.049 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:23.049 issued rwts: total=8734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:23.049 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:23.049 00:15:23.049 Run status group 0 (all jobs): 00:15:23.049 READ: bw=31.8MiB/s (33.3MB/s), 1152KiB/s-12.6MiB/s (1180kB/s-13.2MB/s), io=106MiB (111MB), run=2705-3323msec 00:15:23.049 00:15:23.049 Disk stats (read/write): 00:15:23.049 nvme0n1: ios=904/0, merge=0/0, ticks=3045/0, in_queue=3045, util=95.01% 00:15:23.049 nvme0n2: ios=8310/0, merge=0/0, ticks=2924/0, in_queue=2924, util=95.32% 00:15:23.050 nvme0n3: ios=8949/0, merge=0/0, ticks=3073/0, in_queue=3073, util=99.19% 00:15:23.050 nvme0n4: ios=8394/0, merge=0/0, 
ticks=3565/0, in_queue=3565, util=98.66% 00:15:23.308 18:40:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:23.308 18:40:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:23.566 18:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:23.566 18:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:23.566 18:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:23.566 18:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:23.824 18:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:23.824 18:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:24.083 18:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:24.083 18:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1077942 00:15:24.083 18:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:24.083 18:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:24.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.083 18:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:24.083 18:40:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:15:24.083 18:40:40 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:24.083 18:40:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:24.083 18:40:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:24.083 18:40:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:24.083 18:40:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:15:24.083 18:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:24.083 18:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:24.083 nvmf hotplug test: fio failed as expected 00:15:24.083 18:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:24.340 18:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:24.340 18:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:24.340 18:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:24.340 18:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:24.340 18:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:24.340 18:40:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:24.340 18:40:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:15:24.340 18:40:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:24.340 18:40:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:15:24.340 18:40:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:24.340 18:40:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:24.340 rmmod 
nvme_tcp 00:15:24.340 rmmod nvme_fabrics 00:15:24.340 rmmod nvme_keyring 00:15:24.340 18:40:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:24.340 18:40:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:15:24.340 18:40:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:15:24.340 18:40:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1075213 ']' 00:15:24.340 18:40:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1075213 00:15:24.340 18:40:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1075213 ']' 00:15:24.340 18:40:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1075213 00:15:24.340 18:40:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:15:24.340 18:40:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:24.340 18:40:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1075213 00:15:24.599 18:40:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:24.599 18:40:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:24.599 18:40:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1075213' 00:15:24.599 killing process with pid 1075213 00:15:24.599 18:40:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1075213 00:15:24.599 18:40:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1075213 00:15:24.599 18:40:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:24.599 18:40:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:24.599 18:40:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:24.599 18:40:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:24.599 18:40:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:24.599 18:40:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.599 18:40:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:24.599 18:40:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.131 18:40:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:27.131 00:15:27.131 real 0m26.191s 00:15:27.131 user 1m45.737s 00:15:27.131 sys 0m7.914s 00:15:27.131 18:40:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:27.131 18:40:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.131 ************************************ 00:15:27.131 END TEST nvmf_fio_target 00:15:27.131 ************************************ 00:15:27.131 18:40:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:27.131 18:40:43 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:27.131 18:40:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:27.131 18:40:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:27.131 18:40:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:27.131 ************************************ 00:15:27.131 START TEST nvmf_bdevio 00:15:27.131 ************************************ 00:15:27.131 18:40:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:27.131 * Looking for test storage... 
00:15:27.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:27.131 18:40:43 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:27.131 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:27.131 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.131 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.131 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.131 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.131 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.131 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.131 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.131 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.131 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.131 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.131 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:27.131 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:27.131 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.131 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.131 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:27.131 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.131 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:27.131 18:40:43 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.131 18:40:43 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.131 18:40:43 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.131 18:40:43 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.132 18:40:43 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.132 18:40:43 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.132 18:40:43 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:27.132 18:40:43 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.132 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:15:27.132 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:27.132 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:27.132 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.132 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.132 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.132 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:27.132 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:27.132 18:40:43 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:15:27.132 18:40:43 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:27.132 18:40:43 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:27.132 18:40:43 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:15:27.132 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:27.132 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.132 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:27.132 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:27.132 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:27.132 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.132 18:40:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.132 18:40:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.132 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:27.132 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:27.132 18:40:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:15:27.132 18:40:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 
00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:32.449 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:32.449 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:32.449 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:32.450 Found net devices under 0000:86:00.0: cvl_0_0 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:32.450 Found net devices under 0000:86:00.1: cvl_0_1 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:32.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:32.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:15:32.450 00:15:32.450 --- 10.0.0.2 ping statistics --- 00:15:32.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.450 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:32.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:32.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:15:32.450 00:15:32.450 --- 10.0.0.1 ping statistics --- 00:15:32.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.450 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1082333 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1082333 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1082333 ']' 00:15:32.450 18:40:48 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:32.450 18:40:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:32.450 [2024-07-15 18:40:48.850749] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:15:32.450 [2024-07-15 18:40:48.850795] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.450 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.450 [2024-07-15 18:40:48.906715] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:32.450 [2024-07-15 18:40:48.985613] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.450 [2024-07-15 18:40:48.985648] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.450 [2024-07-15 18:40:48.985656] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.450 [2024-07-15 18:40:48.985662] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.450 [2024-07-15 18:40:48.985667] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:32.450 [2024-07-15 18:40:48.985776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:32.450 [2024-07-15 18:40:48.985881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:32.450 [2024-07-15 18:40:48.985987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:32.450 [2024-07-15 18:40:48.985989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:33.018 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:33.018 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:15:33.018 18:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:33.018 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:33.018 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:33.018 18:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.018 18:40:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:33.018 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.018 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:33.018 [2024-07-15 18:40:49.716298] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.018 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.018 18:40:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:33.018 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.018 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:33.276 Malloc0 00:15:33.276 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.276 18:40:49 nvmf_tcp.nvmf_bdevio 
-- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:33.276 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.276 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:33.276 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.276 18:40:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:33.276 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.276 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:33.276 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.276 18:40:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:33.276 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.276 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:33.276 [2024-07-15 18:40:49.767885] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.276 18:40:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.277 18:40:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:33.277 18:40:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:33.277 18:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:15:33.277 18:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:15:33.277 18:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:33.277 18:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:15:33.277 { 00:15:33.277 "params": { 00:15:33.277 "name": "Nvme$subsystem", 00:15:33.277 "trtype": "$TEST_TRANSPORT", 00:15:33.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:33.277 "adrfam": "ipv4", 00:15:33.277 "trsvcid": "$NVMF_PORT", 00:15:33.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:33.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:33.277 "hdgst": ${hdgst:-false}, 00:15:33.277 "ddgst": ${ddgst:-false} 00:15:33.277 }, 00:15:33.277 "method": "bdev_nvme_attach_controller" 00:15:33.277 } 00:15:33.277 EOF 00:15:33.277 )") 00:15:33.277 18:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:15:33.277 18:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:15:33.277 18:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:15:33.277 18:40:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:33.277 "params": { 00:15:33.277 "name": "Nvme1", 00:15:33.277 "trtype": "tcp", 00:15:33.277 "traddr": "10.0.0.2", 00:15:33.277 "adrfam": "ipv4", 00:15:33.277 "trsvcid": "4420", 00:15:33.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:33.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:33.277 "hdgst": false, 00:15:33.277 "ddgst": false 00:15:33.277 }, 00:15:33.277 "method": "bdev_nvme_attach_controller" 00:15:33.277 }' 00:15:33.277 [2024-07-15 18:40:49.817606] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:15:33.277 [2024-07-15 18:40:49.817646] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1082584 ] 00:15:33.277 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.277 [2024-07-15 18:40:49.872400] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:33.277 [2024-07-15 18:40:49.947800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.277 [2024-07-15 18:40:49.947893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.277 [2024-07-15 18:40:49.947893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:33.535 I/O targets: 00:15:33.535 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:33.535 00:15:33.535 00:15:33.535 CUnit - A unit testing framework for C - Version 2.1-3 00:15:33.535 http://cunit.sourceforge.net/ 00:15:33.535 00:15:33.535 00:15:33.535 Suite: bdevio tests on: Nvme1n1 00:15:33.793 Test: blockdev write read block ...passed 00:15:33.793 Test: blockdev write zeroes read block ...passed 00:15:33.793 Test: blockdev write zeroes read no split ...passed 00:15:33.793 Test: blockdev write zeroes read split ...passed 00:15:33.793 Test: blockdev write zeroes read split partial ...passed 00:15:33.793 Test: blockdev reset ...[2024-07-15 18:40:50.429917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:33.793 [2024-07-15 18:40:50.429979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18786d0 (9): Bad file descriptor 00:15:33.793 [2024-07-15 18:40:50.446097] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:33.793 passed 00:15:33.793 Test: blockdev write read 8 blocks ...passed 00:15:33.793 Test: blockdev write read size > 128k ...passed 00:15:33.793 Test: blockdev write read invalid size ...passed 00:15:33.793 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:33.793 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:33.793 Test: blockdev write read max offset ...passed 00:15:34.051 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:34.051 Test: blockdev writev readv 8 blocks ...passed 00:15:34.051 Test: blockdev writev readv 30 x 1block ...passed 00:15:34.051 Test: blockdev writev readv block ...passed 00:15:34.051 Test: blockdev writev readv size > 128k ...passed 00:15:34.051 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:34.051 Test: blockdev comparev and writev ...[2024-07-15 18:40:50.659798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:34.051 [2024-07-15 18:40:50.659827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:34.051 [2024-07-15 18:40:50.659841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:34.051 [2024-07-15 18:40:50.659849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:34.051 [2024-07-15 18:40:50.660124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:34.051 [2024-07-15 18:40:50.660135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:34.051 [2024-07-15 18:40:50.660147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:34.051 [2024-07-15 18:40:50.660155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:34.051 [2024-07-15 18:40:50.660431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:34.051 [2024-07-15 18:40:50.660442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:34.051 [2024-07-15 18:40:50.660454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:34.051 [2024-07-15 18:40:50.660461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:34.051 [2024-07-15 18:40:50.660744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:34.051 [2024-07-15 18:40:50.660756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:34.051 [2024-07-15 18:40:50.660768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:34.051 [2024-07-15 18:40:50.660775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:34.051 passed 00:15:34.051 Test: blockdev nvme passthru rw ...passed 00:15:34.051 Test: blockdev nvme passthru vendor specific ...[2024-07-15 18:40:50.742657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:34.051 [2024-07-15 18:40:50.742673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:34.051 [2024-07-15 18:40:50.742819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:34.051 [2024-07-15 18:40:50.742829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:34.051 [2024-07-15 18:40:50.742966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:34.051 [2024-07-15 18:40:50.742977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:34.051 [2024-07-15 18:40:50.743118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:34.051 [2024-07-15 18:40:50.743129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:34.051 passed 00:15:34.310 Test: blockdev nvme admin passthru ...passed 00:15:34.310 Test: blockdev copy ...passed 00:15:34.310 00:15:34.310 Run Summary: Type Total Ran Passed Failed Inactive 00:15:34.310 suites 1 1 n/a 0 0 00:15:34.310 tests 23 23 23 0 0 00:15:34.310 asserts 152 152 152 0 n/a 00:15:34.310 00:15:34.310 Elapsed time = 1.148 seconds 00:15:34.310 18:40:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:34.310 18:40:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.310 18:40:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:34.310 18:40:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.310 18:40:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:34.310 18:40:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 
00:15:34.310 18:40:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:34.310 18:40:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:15:34.310 18:40:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:34.310 18:40:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:15:34.310 18:40:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:34.310 18:40:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:34.310 rmmod nvme_tcp 00:15:34.310 rmmod nvme_fabrics 00:15:34.310 rmmod nvme_keyring 00:15:34.568 18:40:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:34.568 18:40:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:15:34.568 18:40:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:15:34.568 18:40:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1082333 ']' 00:15:34.568 18:40:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1082333 00:15:34.568 18:40:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 1082333 ']' 00:15:34.568 18:40:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1082333 00:15:34.568 18:40:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:15:34.568 18:40:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:34.568 18:40:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1082333 00:15:34.568 18:40:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:15:34.568 18:40:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:15:34.568 18:40:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1082333' 00:15:34.568 killing process with pid 1082333 00:15:34.568 18:40:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 
1082333 00:15:34.568 18:40:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1082333 00:15:34.827 18:40:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:34.827 18:40:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:34.827 18:40:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:34.827 18:40:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:34.827 18:40:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:34.827 18:40:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.827 18:40:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:34.827 18:40:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.729 18:40:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:36.729 00:15:36.729 real 0m9.957s 00:15:36.729 user 0m13.032s 00:15:36.729 sys 0m4.470s 00:15:36.729 18:40:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:36.729 18:40:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:36.729 ************************************ 00:15:36.729 END TEST nvmf_bdevio 00:15:36.729 ************************************ 00:15:36.729 18:40:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:36.729 18:40:53 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:36.729 18:40:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:36.729 18:40:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:36.729 18:40:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:36.729 ************************************ 00:15:36.729 START TEST nvmf_auth_target 00:15:36.729 
************************************ 00:15:36.729 18:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:36.987 * Looking for test storage... 00:15:36.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:36.987 18:40:53 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.987 18:40:53 
nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:36.987 18:40:53 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:36.987 18:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:42.251 18:40:58 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:42.251 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:42.251 18:40:58 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:42.251 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:42.251 Found net devices under 0000:86:00.0: cvl_0_0 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:42.251 Found net devices under 0000:86:00.1: cvl_0_1 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:42.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:42.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:15:42.251 00:15:42.251 --- 10.0.0.2 ping statistics --- 00:15:42.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.251 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:42.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:42.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:15:42.251 00:15:42.251 --- 10.0.0.1 ping statistics --- 00:15:42.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.251 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:15:42.251 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:42.252 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:42.252 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:42.252 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:42.252 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:42.252 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:42.252 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:42.252 18:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:15:42.252 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:42.252 18:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- 
# xtrace_disable 00:15:42.252 18:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.252 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1086110 00:15:42.252 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1086110 00:15:42.252 18:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1086110 ']' 00:15:42.252 18:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.252 18:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:42.252 18:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.252 18:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:42.252 18:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:42.252 18:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.818 18:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:42.818 18:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:42.818 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:42.818 18:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:42.818 18:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.818 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:42.818 18:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1086350 00:15:42.818 18:40:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:42.818 18:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:42.818 18:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:15:42.818 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:42.818 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:42.818 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:42.818 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:15:42.818 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:42.818 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:42.818 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=261f0180ae38355c304e0e3aa54f43e5d24a0aadad8e30e2 00:15:42.818 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:42.818 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.vIi 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 261f0180ae38355c304e0e3aa54f43e5d24a0aadad8e30e2 0 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 261f0180ae38355c304e0e3aa54f43e5d24a0aadad8e30e2 0 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=261f0180ae38355c304e0e3aa54f43e5d24a0aadad8e30e2 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:15:42.819 
18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.vIi 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.vIi 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.vIi 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3bb3aa493aa6262a0bd226065228790669f877361885a8d774a2a3c54d7fe91f 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ReZ 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3bb3aa493aa6262a0bd226065228790669f877361885a8d774a2a3c54d7fe91f 3 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3bb3aa493aa6262a0bd226065228790669f877361885a8d774a2a3c54d7fe91f 3 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:42.819 18:40:59 
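The `xxd -p -c0 -l N /dev/urandom` invocations above read `len/2` random bytes and emit a hex string of `len` characters (e.g. `-l 24` yields a 48-character secret, `-l 32` a 64-character one). A Python stand-in for that shell pipeline:

```python
# Equivalent of: xxd -p -c0 -l 24 /dev/urandom
# (secrets.token_hex is a stand-in for the shell pipeline, not SPDK's code.)
import secrets

key = secrets.token_hex(24)  # 24 random bytes -> 48 lowercase hex characters
print(len(key))  # 48
```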
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3bb3aa493aa6262a0bd226065228790669f877361885a8d774a2a3c54d7fe91f 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ReZ 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ReZ 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.ReZ 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=573a7eadf6ab1a7e0c119ea0a147ed5b 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.8oD 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 573a7eadf6ab1a7e0c119ea0a147ed5b 1 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 573a7eadf6ab1a7e0c119ea0a147ed5b 1 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key 
digest 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=573a7eadf6ab1a7e0c119ea0a147ed5b 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:42.819 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.8oD 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.8oD 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.8oD 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=90d8152df2e46694f5b16a837e99d2dc1af1f6d74a64d036 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Woz 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 90d8152df2e46694f5b16a837e99d2dc1af1f6d74a64d036 2 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
90d8152df2e46694f5b16a837e99d2dc1af1f6d74a64d036 2 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=90d8152df2e46694f5b16a837e99d2dc1af1f6d74a64d036 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Woz 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Woz 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Woz 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4c605515f7fa63189fa0619a888bfd70fbd3e51daa7b5ddd 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ls0 00:15:43.077 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # 
format_dhchap_key 4c605515f7fa63189fa0619a888bfd70fbd3e51daa7b5ddd 2 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4c605515f7fa63189fa0619a888bfd70fbd3e51daa7b5ddd 2 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4c605515f7fa63189fa0619a888bfd70fbd3e51daa7b5ddd 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ls0 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ls0 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.ls0 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2f15990596a39755c07f67b582f2eeec 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Mhr 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2f15990596a39755c07f67b582f2eeec 1 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2f15990596a39755c07f67b582f2eeec 1 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2f15990596a39755c07f67b582f2eeec 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Mhr 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Mhr 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Mhr 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=becf02d9fe4217372d6b490432ebe53b597ebda9d58289e5058c30f9bf48e1e7 00:15:43.078 18:40:59 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.dEi 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key becf02d9fe4217372d6b490432ebe53b597ebda9d58289e5058c30f9bf48e1e7 3 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 becf02d9fe4217372d6b490432ebe53b597ebda9d58289e5058c30f9bf48e1e7 3 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=becf02d9fe4217372d6b490432ebe53b597ebda9d58289e5058c30f9bf48e1e7 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.dEi 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.dEi 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.dEi 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1086110 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1086110 ']' 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:15:43.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:43.078 18:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.337 18:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:43.337 18:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:43.337 18:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1086350 /var/tmp/host.sock 00:15:43.337 18:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1086350 ']' 00:15:43.337 18:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:15:43.337 18:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:43.337 18:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:43.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:15:43.337 18:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:43.337 18:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.594 18:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:43.594 18:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:43.594 18:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:15:43.594 18:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.594 18:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.594 18:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.594 18:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:43.594 18:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vIi 00:15:43.594 18:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.594 18:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.594 18:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.594 18:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.vIi 00:15:43.594 18:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.vIi 00:15:43.850 18:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.ReZ ]] 00:15:43.850 18:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ReZ 00:15:43.850 18:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.850 18:41:00 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.850 18:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.850 18:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ReZ 00:15:43.850 18:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ReZ 00:15:43.851 18:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:43.851 18:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.8oD 00:15:43.851 18:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.851 18:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.851 18:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.851 18:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.8oD 00:15:43.851 18:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.8oD 00:15:44.107 18:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Woz ]] 00:15:44.107 18:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Woz 00:15:44.107 18:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.107 18:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.107 18:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.107 18:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc 
keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Woz 00:15:44.107 18:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Woz 00:15:44.364 18:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:44.364 18:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ls0 00:15:44.364 18:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.364 18:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.364 18:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.364 18:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ls0 00:15:44.364 18:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ls0 00:15:44.364 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.Mhr ]] 00:15:44.364 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Mhr 00:15:44.364 18:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.621 18:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.621 18:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.621 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Mhr 00:15:44.621 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.Mhr 00:15:44.621 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:44.621 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.dEi 00:15:44.621 18:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.621 18:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.621 18:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.621 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.dEi 00:15:44.621 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.dEi 00:15:44.880 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:15:44.880 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:44.880 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:44.880 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:44.880 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:44.880 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:45.137 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:15:45.137 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:45.137 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:45.137 18:41:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:45.137 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:45.137 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.137 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.137 18:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.137 18:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.138 18:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.138 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.138 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.138 00:15:45.138 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.138 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.138 18:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.395 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.395 
18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.395 18:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.395 18:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.395 18:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.395 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:45.395 { 00:15:45.395 "cntlid": 1, 00:15:45.395 "qid": 0, 00:15:45.395 "state": "enabled", 00:15:45.395 "thread": "nvmf_tgt_poll_group_000", 00:15:45.395 "listen_address": { 00:15:45.395 "trtype": "TCP", 00:15:45.395 "adrfam": "IPv4", 00:15:45.395 "traddr": "10.0.0.2", 00:15:45.395 "trsvcid": "4420" 00:15:45.395 }, 00:15:45.395 "peer_address": { 00:15:45.395 "trtype": "TCP", 00:15:45.395 "adrfam": "IPv4", 00:15:45.395 "traddr": "10.0.0.1", 00:15:45.395 "trsvcid": "41048" 00:15:45.395 }, 00:15:45.395 "auth": { 00:15:45.395 "state": "completed", 00:15:45.395 "digest": "sha256", 00:15:45.395 "dhgroup": "null" 00:15:45.395 } 00:15:45.395 } 00:15:45.395 ]' 00:15:45.395 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:45.395 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:45.395 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:45.395 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:45.395 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:45.652 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.652 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.652 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.652 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjYxZjAxODBhZTM4MzU1YzMwNGUwZTNhYTU0ZjQzZTVkMjRhMGFhZGFkOGUzMGUy0z6AJw==: --dhchap-ctrl-secret DHHC-1:03:M2JiM2FhNDkzYWE2MjYyYTBiZDIyNjA2NTIyODc5MDY2OWY4NzczNjE4ODVhOGQ3NzRhMmEzYzU0ZDdmZTkxZtIpNpg=: 00:15:46.215 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.215 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:46.215 18:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.215 18:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.215 18:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.215 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:46.215 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:46.215 18:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:46.472 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:15:46.472 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:15:46.472 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:46.472 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:46.472 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:46.472 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.472 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.472 18:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.472 18:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.472 18:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.472 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.472 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.729 00:15:46.729 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:46.729 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:46.729 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:15:46.986 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.986 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.986 18:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.986 18:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.986 18:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.986 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:46.986 { 00:15:46.986 "cntlid": 3, 00:15:46.986 "qid": 0, 00:15:46.986 "state": "enabled", 00:15:46.986 "thread": "nvmf_tgt_poll_group_000", 00:15:46.986 "listen_address": { 00:15:46.986 "trtype": "TCP", 00:15:46.986 "adrfam": "IPv4", 00:15:46.986 "traddr": "10.0.0.2", 00:15:46.986 "trsvcid": "4420" 00:15:46.986 }, 00:15:46.986 "peer_address": { 00:15:46.986 "trtype": "TCP", 00:15:46.986 "adrfam": "IPv4", 00:15:46.986 "traddr": "10.0.0.1", 00:15:46.986 "trsvcid": "41066" 00:15:46.986 }, 00:15:46.986 "auth": { 00:15:46.986 "state": "completed", 00:15:46.986 "digest": "sha256", 00:15:46.986 "dhgroup": "null" 00:15:46.986 } 00:15:46.986 } 00:15:46.986 ]' 00:15:46.986 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:46.986 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:46.986 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:46.986 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:46.986 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:46.986 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.986 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:15:46.986 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.243 18:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTczYTdlYWRmNmFiMWE3ZTBjMTE5ZWEwYTE0N2VkNWJV/FZF: --dhchap-ctrl-secret DHHC-1:02:OTBkODE1MmRmMmU0NjY5NGY1YjE2YTgzN2U5OWQyZGMxYWYxZjZkNzRhNjRkMDM2c3wcMw==: 00:15:47.805 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.805 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:47.805 18:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.805 18:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.805 18:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.805 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:47.805 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:47.805 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:48.062 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:15:48.062 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # 
local digest dhgroup key ckey qpairs 00:15:48.062 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:48.062 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:48.062 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:48.062 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.062 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.062 18:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.062 18:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.062 18:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.062 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.062 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.318 00:15:48.318 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:48.318 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:48.318 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.318 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.318 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.318 18:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.318 18:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.318 18:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.318 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:48.318 { 00:15:48.318 "cntlid": 5, 00:15:48.318 "qid": 0, 00:15:48.318 "state": "enabled", 00:15:48.318 "thread": "nvmf_tgt_poll_group_000", 00:15:48.318 "listen_address": { 00:15:48.318 "trtype": "TCP", 00:15:48.318 "adrfam": "IPv4", 00:15:48.318 "traddr": "10.0.0.2", 00:15:48.318 "trsvcid": "4420" 00:15:48.318 }, 00:15:48.318 "peer_address": { 00:15:48.318 "trtype": "TCP", 00:15:48.318 "adrfam": "IPv4", 00:15:48.318 "traddr": "10.0.0.1", 00:15:48.318 "trsvcid": "41100" 00:15:48.318 }, 00:15:48.318 "auth": { 00:15:48.318 "state": "completed", 00:15:48.319 "digest": "sha256", 00:15:48.319 "dhgroup": "null" 00:15:48.319 } 00:15:48.319 } 00:15:48.319 ]' 00:15:48.319 18:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:48.319 18:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:48.319 18:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:48.574 18:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:48.574 18:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:48.575 18:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.575 18:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 
-- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.575 18:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.575 18:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NGM2MDU1MTVmN2ZhNjMxODlmYTA2MTlhODg4YmZkNzBmYmQzZTUxZGFhN2I1ZGRkLrmtFg==: --dhchap-ctrl-secret DHHC-1:01:MmYxNTk5MDU5NmEzOTc1NWMwN2Y2N2I1ODJmMmVlZWN02LXh: 00:15:49.137 18:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.394 18:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:49.394 18:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.394 18:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.394 18:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.394 18:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:49.394 18:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:49.394 18:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:49.394 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:15:49.394 18:41:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:49.394 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:49.394 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:49.394 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:49.394 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.394 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:49.394 18:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.394 18:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.394 18:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.394 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:49.394 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:49.665 00:15:49.665 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:49.665 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:49.665 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:15:49.930 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.930 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.930 18:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.930 18:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.930 18:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.930 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:49.930 { 00:15:49.930 "cntlid": 7, 00:15:49.930 "qid": 0, 00:15:49.930 "state": "enabled", 00:15:49.930 "thread": "nvmf_tgt_poll_group_000", 00:15:49.930 "listen_address": { 00:15:49.930 "trtype": "TCP", 00:15:49.930 "adrfam": "IPv4", 00:15:49.930 "traddr": "10.0.0.2", 00:15:49.930 "trsvcid": "4420" 00:15:49.931 }, 00:15:49.931 "peer_address": { 00:15:49.931 "trtype": "TCP", 00:15:49.931 "adrfam": "IPv4", 00:15:49.931 "traddr": "10.0.0.1", 00:15:49.931 "trsvcid": "41112" 00:15:49.931 }, 00:15:49.931 "auth": { 00:15:49.931 "state": "completed", 00:15:49.931 "digest": "sha256", 00:15:49.931 "dhgroup": "null" 00:15:49.931 } 00:15:49.931 } 00:15:49.931 ]' 00:15:49.931 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:49.931 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:49.931 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:49.931 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:49.931 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:49.931 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.931 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:49.931 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.187 18:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YmVjZjAyZDlmZTQyMTczNzJkNmI0OTA0MzJlYmU1M2I1OTdlYmRhOWQ1ODI4OWU1MDU4YzMwZjliZjQ4ZTFlNyVaiZA=: 00:15:50.752 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.752 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:50.752 18:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.752 18:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.752 18:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.752 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:50.752 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:50.752 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:50.752 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:51.009 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe2048 0 00:15:51.009 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:51.009 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:51.009 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:51.009 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:51.009 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.009 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.009 18:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.009 18:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.009 18:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.009 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.009 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.293 00:15:51.293 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:51.293 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:51.293 18:41:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.293 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.293 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.293 18:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.293 18:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.293 18:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.293 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:51.293 { 00:15:51.293 "cntlid": 9, 00:15:51.293 "qid": 0, 00:15:51.293 "state": "enabled", 00:15:51.293 "thread": "nvmf_tgt_poll_group_000", 00:15:51.293 "listen_address": { 00:15:51.293 "trtype": "TCP", 00:15:51.293 "adrfam": "IPv4", 00:15:51.293 "traddr": "10.0.0.2", 00:15:51.293 "trsvcid": "4420" 00:15:51.293 }, 00:15:51.293 "peer_address": { 00:15:51.293 "trtype": "TCP", 00:15:51.293 "adrfam": "IPv4", 00:15:51.293 "traddr": "10.0.0.1", 00:15:51.293 "trsvcid": "39994" 00:15:51.293 }, 00:15:51.293 "auth": { 00:15:51.293 "state": "completed", 00:15:51.293 "digest": "sha256", 00:15:51.293 "dhgroup": "ffdhe2048" 00:15:51.293 } 00:15:51.293 } 00:15:51.293 ]' 00:15:51.556 18:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:51.556 18:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:51.556 18:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:51.556 18:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:51.556 18:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:51.556 18:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.556 18:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.556 18:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.812 18:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjYxZjAxODBhZTM4MzU1YzMwNGUwZTNhYTU0ZjQzZTVkMjRhMGFhZGFkOGUzMGUy0z6AJw==: --dhchap-ctrl-secret DHHC-1:03:M2JiM2FhNDkzYWE2MjYyYTBiZDIyNjA2NTIyODc5MDY2OWY4NzczNjE4ODVhOGQ3NzRhMmEzYzU0ZDdmZTkxZtIpNpg=: 00:15:52.378 18:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.378 18:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:52.378 18:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.378 18:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.378 18:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.378 18:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:52.378 18:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:52.378 18:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:15:52.378 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:15:52.378 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:52.378 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:52.378 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:52.378 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:52.378 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.378 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.378 18:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.378 18:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.378 18:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.378 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.378 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.636 00:15:52.636 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:52.636 18:41:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:52.636 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.894 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.894 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.894 18:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.894 18:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.894 18:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.894 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:52.894 { 00:15:52.894 "cntlid": 11, 00:15:52.894 "qid": 0, 00:15:52.894 "state": "enabled", 00:15:52.894 "thread": "nvmf_tgt_poll_group_000", 00:15:52.894 "listen_address": { 00:15:52.894 "trtype": "TCP", 00:15:52.894 "adrfam": "IPv4", 00:15:52.894 "traddr": "10.0.0.2", 00:15:52.894 "trsvcid": "4420" 00:15:52.894 }, 00:15:52.894 "peer_address": { 00:15:52.894 "trtype": "TCP", 00:15:52.894 "adrfam": "IPv4", 00:15:52.894 "traddr": "10.0.0.1", 00:15:52.894 "trsvcid": "40022" 00:15:52.894 }, 00:15:52.894 "auth": { 00:15:52.894 "state": "completed", 00:15:52.894 "digest": "sha256", 00:15:52.894 "dhgroup": "ffdhe2048" 00:15:52.894 } 00:15:52.894 } 00:15:52.894 ]' 00:15:52.894 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:52.894 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:52.894 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:52.894 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:52.894 18:41:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:52.894 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.894 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.894 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.152 18:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTczYTdlYWRmNmFiMWE3ZTBjMTE5ZWEwYTE0N2VkNWJV/FZF: --dhchap-ctrl-secret DHHC-1:02:OTBkODE1MmRmMmU0NjY5NGY1YjE2YTgzN2U5OWQyZGMxYWYxZjZkNzRhNjRkMDM2c3wcMw==: 00:15:53.718 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.718 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:53.718 18:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.718 18:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.718 18:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.718 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:53.718 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:53.718 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:53.976 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:15:53.976 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:53.976 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:53.976 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:53.976 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:53.976 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.976 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.976 18:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.976 18:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.976 18:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.976 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.976 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.235 
00:15:54.235 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:54.235 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:54.235 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.493 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.493 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.493 18:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.493 18:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.493 18:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.493 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:54.493 { 00:15:54.493 "cntlid": 13, 00:15:54.493 "qid": 0, 00:15:54.493 "state": "enabled", 00:15:54.493 "thread": "nvmf_tgt_poll_group_000", 00:15:54.493 "listen_address": { 00:15:54.493 "trtype": "TCP", 00:15:54.493 "adrfam": "IPv4", 00:15:54.493 "traddr": "10.0.0.2", 00:15:54.493 "trsvcid": "4420" 00:15:54.493 }, 00:15:54.493 "peer_address": { 00:15:54.493 "trtype": "TCP", 00:15:54.493 "adrfam": "IPv4", 00:15:54.493 "traddr": "10.0.0.1", 00:15:54.493 "trsvcid": "40052" 00:15:54.493 }, 00:15:54.493 "auth": { 00:15:54.493 "state": "completed", 00:15:54.493 "digest": "sha256", 00:15:54.493 "dhgroup": "ffdhe2048" 00:15:54.493 } 00:15:54.493 } 00:15:54.493 ]' 00:15:54.493 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:54.493 18:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:54.493 18:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:54.493 18:41:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:54.493 18:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:54.493 18:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.493 18:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.493 18:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.751 18:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NGM2MDU1MTVmN2ZhNjMxODlmYTA2MTlhODg4YmZkNzBmYmQzZTUxZGFhN2I1ZGRkLrmtFg==: --dhchap-ctrl-secret DHHC-1:01:MmYxNTk5MDU5NmEzOTc1NWMwN2Y2N2I1ODJmMmVlZWN02LXh: 00:15:55.317 18:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.317 18:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:55.317 18:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.317 18:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.317 18:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.317 18:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:55.317 18:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:15:55.317 18:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:55.317 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:15:55.317 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:55.317 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:55.317 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:55.317 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:55.317 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.317 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:55.317 18:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.317 18:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.575 18:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.575 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:55.575 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:55.575 
00:15:55.575 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:55.575 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:55.575 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.833 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.833 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.833 18:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.833 18:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.833 18:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.833 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.833 { 00:15:55.833 "cntlid": 15, 00:15:55.833 "qid": 0, 00:15:55.833 "state": "enabled", 00:15:55.833 "thread": "nvmf_tgt_poll_group_000", 00:15:55.833 "listen_address": { 00:15:55.833 "trtype": "TCP", 00:15:55.833 "adrfam": "IPv4", 00:15:55.833 "traddr": "10.0.0.2", 00:15:55.833 "trsvcid": "4420" 00:15:55.833 }, 00:15:55.833 "peer_address": { 00:15:55.833 "trtype": "TCP", 00:15:55.833 "adrfam": "IPv4", 00:15:55.833 "traddr": "10.0.0.1", 00:15:55.833 "trsvcid": "40082" 00:15:55.833 }, 00:15:55.833 "auth": { 00:15:55.833 "state": "completed", 00:15:55.833 "digest": "sha256", 00:15:55.833 "dhgroup": "ffdhe2048" 00:15:55.833 } 00:15:55.833 } 00:15:55.833 ]' 00:15:55.833 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.833 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:55.833 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:56.091 18:41:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:56.091 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:56.091 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.091 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.091 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.091 18:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YmVjZjAyZDlmZTQyMTczNzJkNmI0OTA0MzJlYmU1M2I1OTdlYmRhOWQ1ODI4OWU1MDU4YzMwZjliZjQ4ZTFlNyVaiZA=: 00:15:56.656 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.656 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:56.656 18:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.656 18:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.656 18:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.656 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:56.656 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:56.656 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:56.656 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:56.913 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:15:56.913 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:56.913 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:56.913 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:56.913 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:56.913 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.913 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.913 18:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.913 18:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.913 18:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.913 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.913 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.171 00:15:57.171 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.171 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:57.171 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.429 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.429 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.429 18:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.429 18:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.429 18:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.429 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:57.429 { 00:15:57.429 "cntlid": 17, 00:15:57.429 "qid": 0, 00:15:57.429 "state": "enabled", 00:15:57.429 "thread": "nvmf_tgt_poll_group_000", 00:15:57.429 "listen_address": { 00:15:57.429 "trtype": "TCP", 00:15:57.429 "adrfam": "IPv4", 00:15:57.429 "traddr": "10.0.0.2", 00:15:57.429 "trsvcid": "4420" 00:15:57.429 }, 00:15:57.429 "peer_address": { 00:15:57.429 "trtype": "TCP", 00:15:57.429 "adrfam": "IPv4", 00:15:57.429 "traddr": "10.0.0.1", 00:15:57.429 "trsvcid": "40108" 00:15:57.429 }, 00:15:57.429 "auth": { 00:15:57.429 "state": "completed", 00:15:57.429 "digest": "sha256", 00:15:57.429 "dhgroup": "ffdhe3072" 00:15:57.429 } 00:15:57.429 } 00:15:57.429 ]' 00:15:57.429 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:57.429 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha256 == \s\h\a\2\5\6 ]] 00:15:57.429 18:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:57.429 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:57.429 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:57.429 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.429 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.429 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.686 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjYxZjAxODBhZTM4MzU1YzMwNGUwZTNhYTU0ZjQzZTVkMjRhMGFhZGFkOGUzMGUy0z6AJw==: --dhchap-ctrl-secret DHHC-1:03:M2JiM2FhNDkzYWE2MjYyYTBiZDIyNjA2NTIyODc5MDY2OWY4NzczNjE4ODVhOGQ3NzRhMmEzYzU0ZDdmZTkxZtIpNpg=: 00:15:58.252 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.252 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:58.252 18:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.252 18:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.252 18:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.252 18:41:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:58.252 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:58.252 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:58.510 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:15:58.510 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:58.510 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:58.510 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:58.510 18:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:58.510 18:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.510 18:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.510 18:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.510 18:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.510 18:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.510 18:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.510 18:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:58.768
00:15:58.768 18:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:58.768 18:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:58.768 18:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:58.768 18:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:58.768 18:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:58.768 18:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:58.768 18:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:58.768 18:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:58.768 18:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:58.768 {
00:15:58.768 "cntlid": 19,
00:15:58.768 "qid": 0,
00:15:58.768 "state": "enabled",
00:15:58.768 "thread": "nvmf_tgt_poll_group_000",
00:15:58.768 "listen_address": {
00:15:58.768 "trtype": "TCP",
00:15:58.768 "adrfam": "IPv4",
00:15:58.768 "traddr": "10.0.0.2",
00:15:58.768 "trsvcid": "4420"
00:15:58.768 },
00:15:58.768 "peer_address": {
00:15:58.768 "trtype": "TCP",
00:15:58.768 "adrfam": "IPv4",
00:15:58.768 "traddr": "10.0.0.1",
00:15:58.768 "trsvcid": "40124"
00:15:58.768 },
00:15:58.768 "auth": {
00:15:58.768 "state": "completed",
00:15:58.768 "digest": "sha256",
00:15:58.768 "dhgroup": "ffdhe3072"
00:15:58.768 }
00:15:58.768 }
00:15:58.768 ]'
00:15:58.769 18:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:59.027 18:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:59.027 18:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:59.027 18:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:59.027 18:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:59.027 18:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:59.027 18:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:59.027 18:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:59.285 18:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTczYTdlYWRmNmFiMWE3ZTBjMTE5ZWEwYTE0N2VkNWJV/FZF: --dhchap-ctrl-secret DHHC-1:02:OTBkODE1MmRmMmU0NjY5NGY1YjE2YTgzN2U5OWQyZGMxYWYxZjZkNzRhNjRkMDM2c3wcMw==:
00:15:59.850 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:59.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:59.850 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:59.850 18:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:59.850 18:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:59.850 18:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:59.850 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:59.850 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:59.850 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:59.850 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2
00:15:59.850 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:59.850 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:15:59.850 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:15:59.850 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:15:59.850 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:59.850 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:59.850 18:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:59.850 18:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:59.850 18:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:59.850 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:59.850 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:00.107
00:16:00.107 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:00.107 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:00.107 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:00.365 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:00.365 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:00.365 18:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:00.365 18:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:00.365 18:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:00.365 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:00.365 {
00:16:00.365 "cntlid": 21,
00:16:00.365 "qid": 0,
00:16:00.365 "state": "enabled",
00:16:00.365 "thread": "nvmf_tgt_poll_group_000",
00:16:00.365 "listen_address": {
00:16:00.365 "trtype": "TCP",
00:16:00.365 "adrfam": "IPv4",
00:16:00.365 "traddr": "10.0.0.2",
00:16:00.365 "trsvcid": "4420"
00:16:00.365 },
00:16:00.365 "peer_address": {
00:16:00.365 "trtype": "TCP",
00:16:00.365 "adrfam": "IPv4",
00:16:00.365 "traddr": "10.0.0.1",
00:16:00.365 "trsvcid": "40148"
00:16:00.365 },
00:16:00.365 "auth": {
00:16:00.365 "state": "completed",
00:16:00.365 "digest": "sha256",
00:16:00.365 "dhgroup": "ffdhe3072"
00:16:00.365 }
00:16:00.365 }
00:16:00.365 ]'
00:16:00.365 18:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:00.365 18:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:00.365 18:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:00.365 18:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:00.365 18:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:00.624 18:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:00.624 18:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:00.624 18:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:00.624 18:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NGM2MDU1MTVmN2ZhNjMxODlmYTA2MTlhODg4YmZkNzBmYmQzZTUxZGFhN2I1ZGRkLrmtFg==: --dhchap-ctrl-secret DHHC-1:01:MmYxNTk5MDU5NmEzOTc1NWMwN2Y2N2I1ODJmMmVlZWN02LXh:
00:16:01.190 18:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:01.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:01.190 18:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:01.190 18:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:01.190 18:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:01.190 18:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:01.190 18:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:01.190 18:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:01.190 18:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:01.448 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3
00:16:01.448 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:01.448 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:01.448 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:01.448 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:01.448 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:01.448 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:16:01.448 18:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:01.448 18:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:01.448 18:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:01.448 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:01.448 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:01.706
00:16:01.706 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:01.706 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:01.706 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:01.964 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:01.964 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:01.964 18:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:01.964 18:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:01.964 18:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:01.964 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:01.964 {
00:16:01.964 "cntlid": 23,
00:16:01.964 "qid": 0,
00:16:01.964 "state": "enabled",
00:16:01.964 "thread": "nvmf_tgt_poll_group_000",
00:16:01.964 "listen_address": {
00:16:01.964 "trtype": "TCP",
00:16:01.964 "adrfam": "IPv4",
00:16:01.964 "traddr": "10.0.0.2",
00:16:01.964 "trsvcid": "4420"
00:16:01.964 },
00:16:01.964 "peer_address": {
00:16:01.964 "trtype": "TCP",
00:16:01.964 "adrfam": "IPv4",
00:16:01.964 "traddr": "10.0.0.1",
00:16:01.964 "trsvcid": "50786"
00:16:01.964 },
00:16:01.964 "auth": {
00:16:01.964 "state": "completed",
00:16:01.964 "digest": "sha256",
00:16:01.964 "dhgroup": "ffdhe3072"
00:16:01.964 }
00:16:01.964 }
00:16:01.964 ]'
00:16:01.964 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:01.964 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:01.964 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:01.964 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:01.964 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:01.964 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:01.964 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:01.964 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:02.222 18:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YmVjZjAyZDlmZTQyMTczNzJkNmI0OTA0MzJlYmU1M2I1OTdlYmRhOWQ1ODI4OWU1MDU4YzMwZjliZjQ4ZTFlNyVaiZA=:
00:16:02.789 18:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:02.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:02.789 18:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:02.789 18:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:02.789 18:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:02.789 18:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:02.789 18:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:02.789 18:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:02.789 18:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:02.789 18:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:03.047 18:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0
00:16:03.047 18:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:03.047 18:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:03.047 18:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:16:03.047 18:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:03.047 18:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:03.047 18:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:03.047 18:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:03.047 18:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:03.047 18:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:03.047 18:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:03.047 18:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:03.306
00:16:03.306 18:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:03.306 18:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:03.306 18:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:03.306 18:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:03.306 18:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:03.306 18:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:03.306 18:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:03.306 18:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:03.306 18:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:03.306 {
00:16:03.306 "cntlid": 25,
00:16:03.306 "qid": 0,
00:16:03.306 "state": "enabled",
00:16:03.306 "thread": "nvmf_tgt_poll_group_000",
00:16:03.306 "listen_address": {
00:16:03.306 "trtype": "TCP",
00:16:03.306 "adrfam": "IPv4",
00:16:03.306 "traddr": "10.0.0.2",
00:16:03.306 "trsvcid": "4420"
00:16:03.306 },
00:16:03.307 "peer_address": {
00:16:03.307 "trtype": "TCP",
00:16:03.307 "adrfam": "IPv4",
00:16:03.307 "traddr": "10.0.0.1",
00:16:03.307 "trsvcid": "50812"
00:16:03.307 },
00:16:03.307 "auth": {
00:16:03.307 "state": "completed",
00:16:03.307 "digest": "sha256",
00:16:03.307 "dhgroup": "ffdhe4096"
00:16:03.307 }
00:16:03.307 }
00:16:03.307 ]'
00:16:03.307 18:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:03.307 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:03.566 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:03.566 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:03.566 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:03.566 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:03.566 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:03.566 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:03.566 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjYxZjAxODBhZTM4MzU1YzMwNGUwZTNhYTU0ZjQzZTVkMjRhMGFhZGFkOGUzMGUy0z6AJw==: --dhchap-ctrl-secret DHHC-1:03:M2JiM2FhNDkzYWE2MjYyYTBiZDIyNjA2NTIyODc5MDY2OWY4NzczNjE4ODVhOGQ3NzRhMmEzYzU0ZDdmZTkxZtIpNpg=:
00:16:04.135 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:04.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:04.135 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:04.135 18:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:04.135 18:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:04.394 18:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:04.394 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:04.394 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:04.394 18:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:04.394 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1
00:16:04.394 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:04.394 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:04.394 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:16:04.394 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:04.394 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:04.394 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:04.394 18:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:04.394 18:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:04.394 18:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:04.394 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:04.394 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:04.653
00:16:04.653 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:04.653 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:04.653 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:04.912 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:04.912 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:04.912 18:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:04.912 18:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:04.912 18:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:04.912 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:04.912 {
00:16:04.912 "cntlid": 27,
00:16:04.912 "qid": 0,
00:16:04.912 "state": "enabled",
00:16:04.912 "thread": "nvmf_tgt_poll_group_000",
00:16:04.912 "listen_address": {
00:16:04.912 "trtype": "TCP",
00:16:04.912 "adrfam": "IPv4",
00:16:04.912 "traddr": "10.0.0.2",
00:16:04.912 "trsvcid": "4420"
00:16:04.912 },
00:16:04.912 "peer_address": {
00:16:04.912 "trtype": "TCP",
00:16:04.912 "adrfam": "IPv4",
00:16:04.912 "traddr": "10.0.0.1",
00:16:04.912 "trsvcid": "50842"
00:16:04.912 },
00:16:04.912 "auth": {
00:16:04.912 "state": "completed",
00:16:04.912 "digest": "sha256",
00:16:04.912 "dhgroup": "ffdhe4096"
00:16:04.912 }
00:16:04.912 }
00:16:04.912 ]'
00:16:04.912 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:04.912 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:04.912 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:04.912 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:04.912 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:04.912 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:04.912 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:04.912 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:05.171 18:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTczYTdlYWRmNmFiMWE3ZTBjMTE5ZWEwYTE0N2VkNWJV/FZF: --dhchap-ctrl-secret DHHC-1:02:OTBkODE1MmRmMmU0NjY5NGY1YjE2YTgzN2U5OWQyZGMxYWYxZjZkNzRhNjRkMDM2c3wcMw==:
00:16:05.738 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:05.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:05.738 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:05.738 18:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:05.738 18:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:05.738 18:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:05.738 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:05.738 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:05.738 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:05.997 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2
00:16:05.997 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:05.997 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:05.997 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:16:05.997 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:05.997 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:05.997 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:05.997 18:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:05.997 18:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:05.997 18:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:05.997 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:05.997 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:06.256
00:16:06.256 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:06.256 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:06.256 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:06.515 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:06.515 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:06.515 18:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:06.515 18:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:06.515 18:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:06.515 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:06.515 {
00:16:06.515 "cntlid": 29,
00:16:06.515 "qid": 0,
00:16:06.515 "state": "enabled",
00:16:06.515 "thread": "nvmf_tgt_poll_group_000",
00:16:06.515 "listen_address": {
00:16:06.515 "trtype": "TCP",
00:16:06.515 "adrfam": "IPv4",
00:16:06.515 "traddr": "10.0.0.2",
00:16:06.515 "trsvcid": "4420"
00:16:06.515 },
00:16:06.515 "peer_address": {
00:16:06.515 "trtype": "TCP",
00:16:06.515 "adrfam": "IPv4",
00:16:06.515 "traddr": "10.0.0.1",
00:16:06.515 "trsvcid": "50860"
00:16:06.515 },
00:16:06.515 "auth": {
00:16:06.515 "state": "completed",
00:16:06.515 "digest": "sha256",
00:16:06.515 "dhgroup": "ffdhe4096"
00:16:06.515 }
00:16:06.515 }
00:16:06.515 ]'
00:16:06.515 18:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:06.515 18:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:06.515 18:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:06.515 18:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:06.515 18:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:06.515 18:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:06.515 18:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:06.515 18:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:06.776 18:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NGM2MDU1MTVmN2ZhNjMxODlmYTA2MTlhODg4YmZkNzBmYmQzZTUxZGFhN2I1ZGRkLrmtFg==: --dhchap-ctrl-secret DHHC-1:01:MmYxNTk5MDU5NmEzOTc1NWMwN2Y2N2I1ODJmMmVlZWN02LXh:
00:16:07.344 18:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:07.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:07.344 18:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:07.344 18:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:07.344 18:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:07.344 18:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:07.344 18:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:07.344 18:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:07.344 18:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:07.344 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3
00:16:07.344 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:07.344 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:07.344 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:16:07.344 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:07.344 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:07.344 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:16:07.344 18:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:07.344 18:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:07.344 18:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:07.344 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:07.344 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:07.603
00:16:07.603 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:07.603 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:07.603 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:07.864 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:07.864 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:07.864 18:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:07.864 18:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:07.864 18:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:07.864 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:07.864 {
00:16:07.864 "cntlid": 31,
00:16:07.864 "qid": 0,
00:16:07.864 "state": "enabled",
00:16:07.864 "thread": "nvmf_tgt_poll_group_000",
00:16:07.864 "listen_address": {
00:16:07.864 "trtype": "TCP",
00:16:07.864 "adrfam": "IPv4",
00:16:07.864 "traddr": "10.0.0.2",
00:16:07.864 "trsvcid": "4420"
00:16:07.864 },
00:16:07.864 "peer_address": {
00:16:07.864 "trtype": "TCP",
00:16:07.864 "adrfam": "IPv4",
00:16:07.864 "traddr": "10.0.0.1",
00:16:07.864 "trsvcid": "50884"
00:16:07.864 },
00:16:07.864 "auth": {
00:16:07.864 "state": "completed",
00:16:07.864 "digest": "sha256",
00:16:07.864 "dhgroup": "ffdhe4096"
00:16:07.864 }
00:16:07.864 }
00:16:07.864 ]'
00:16:07.864 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:07.864 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:07.864 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:07.864 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:07.864 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:08.191 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:08.191 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:08.191 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:08.191 18:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YmVjZjAyZDlmZTQyMTczNzJkNmI0OTA0MzJlYmU1M2I1OTdlYmRhOWQ1ODI4OWU1MDU4YzMwZjliZjQ4ZTFlNyVaiZA=:
00:16:08.759 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:08.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:08.759 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:08.759 18:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:08.759 18:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:08.759 18:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:08.759 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:08.759 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:08.759 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:16:08.759 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:16:09.018 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0
00:16:09.018 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:09.018 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:09.018 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:16:09.018 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:09.018 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:09.018 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0
--dhchap-ctrlr-key ckey0 00:16:09.018 18:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.018 18:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.018 18:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.018 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.018 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.277 00:16:09.277 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:09.277 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:09.277 18:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.535 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.535 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.535 18:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.535 18:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.535 18:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.535 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:16:09.535 { 00:16:09.535 "cntlid": 33, 00:16:09.535 "qid": 0, 00:16:09.535 "state": "enabled", 00:16:09.535 "thread": "nvmf_tgt_poll_group_000", 00:16:09.535 "listen_address": { 00:16:09.535 "trtype": "TCP", 00:16:09.535 "adrfam": "IPv4", 00:16:09.535 "traddr": "10.0.0.2", 00:16:09.535 "trsvcid": "4420" 00:16:09.535 }, 00:16:09.535 "peer_address": { 00:16:09.535 "trtype": "TCP", 00:16:09.535 "adrfam": "IPv4", 00:16:09.535 "traddr": "10.0.0.1", 00:16:09.535 "trsvcid": "50916" 00:16:09.535 }, 00:16:09.535 "auth": { 00:16:09.535 "state": "completed", 00:16:09.535 "digest": "sha256", 00:16:09.535 "dhgroup": "ffdhe6144" 00:16:09.535 } 00:16:09.535 } 00:16:09.535 ]' 00:16:09.535 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:09.535 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:09.535 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:09.535 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:09.535 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:09.535 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.535 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.535 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.794 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjYxZjAxODBhZTM4MzU1YzMwNGUwZTNhYTU0ZjQzZTVkMjRhMGFhZGFkOGUzMGUy0z6AJw==: --dhchap-ctrl-secret 
DHHC-1:03:M2JiM2FhNDkzYWE2MjYyYTBiZDIyNjA2NTIyODc5MDY2OWY4NzczNjE4ODVhOGQ3NzRhMmEzYzU0ZDdmZTkxZtIpNpg=: 00:16:10.360 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.360 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:10.360 18:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.360 18:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.360 18:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.360 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.360 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:10.360 18:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:10.619 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:10.619 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.619 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:10.619 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:10.619 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:10.619 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.619 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.619 18:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.619 18:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.619 18:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.619 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.619 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.877 00:16:10.877 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:10.877 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:10.877 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.136 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.136 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.136 18:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.136 18:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.136 18:41:27 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.136 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:11.136 { 00:16:11.136 "cntlid": 35, 00:16:11.136 "qid": 0, 00:16:11.136 "state": "enabled", 00:16:11.136 "thread": "nvmf_tgt_poll_group_000", 00:16:11.136 "listen_address": { 00:16:11.136 "trtype": "TCP", 00:16:11.136 "adrfam": "IPv4", 00:16:11.136 "traddr": "10.0.0.2", 00:16:11.136 "trsvcid": "4420" 00:16:11.136 }, 00:16:11.136 "peer_address": { 00:16:11.136 "trtype": "TCP", 00:16:11.136 "adrfam": "IPv4", 00:16:11.136 "traddr": "10.0.0.1", 00:16:11.136 "trsvcid": "40098" 00:16:11.136 }, 00:16:11.136 "auth": { 00:16:11.136 "state": "completed", 00:16:11.136 "digest": "sha256", 00:16:11.136 "dhgroup": "ffdhe6144" 00:16:11.136 } 00:16:11.136 } 00:16:11.136 ]' 00:16:11.136 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:11.136 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:11.136 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:11.136 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:11.136 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:11.136 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.136 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.136 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.394 18:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTczYTdlYWRmNmFiMWE3ZTBjMTE5ZWEwYTE0N2VkNWJV/FZF: --dhchap-ctrl-secret DHHC-1:02:OTBkODE1MmRmMmU0NjY5NGY1YjE2YTgzN2U5OWQyZGMxYWYxZjZkNzRhNjRkMDM2c3wcMw==: 00:16:11.959 18:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.959 18:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:11.959 18:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.959 18:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.959 18:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.959 18:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:11.959 18:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:11.959 18:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:12.217 18:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:12.217 18:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:12.217 18:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:12.217 18:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:12.217 18:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:12.217 18:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:16:12.217 18:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.217 18:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.217 18:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.217 18:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.217 18:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.217 18:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.475 00:16:12.475 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:12.475 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.475 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:12.734 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.734 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.734 18:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.734 18:41:29 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.734 18:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.734 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:12.734 { 00:16:12.734 "cntlid": 37, 00:16:12.734 "qid": 0, 00:16:12.734 "state": "enabled", 00:16:12.734 "thread": "nvmf_tgt_poll_group_000", 00:16:12.734 "listen_address": { 00:16:12.734 "trtype": "TCP", 00:16:12.734 "adrfam": "IPv4", 00:16:12.734 "traddr": "10.0.0.2", 00:16:12.734 "trsvcid": "4420" 00:16:12.734 }, 00:16:12.734 "peer_address": { 00:16:12.734 "trtype": "TCP", 00:16:12.734 "adrfam": "IPv4", 00:16:12.734 "traddr": "10.0.0.1", 00:16:12.734 "trsvcid": "40114" 00:16:12.734 }, 00:16:12.734 "auth": { 00:16:12.734 "state": "completed", 00:16:12.734 "digest": "sha256", 00:16:12.734 "dhgroup": "ffdhe6144" 00:16:12.734 } 00:16:12.734 } 00:16:12.734 ]' 00:16:12.734 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.734 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.734 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.734 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:12.734 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:12.734 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.734 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.734 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.993 18:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NGM2MDU1MTVmN2ZhNjMxODlmYTA2MTlhODg4YmZkNzBmYmQzZTUxZGFhN2I1ZGRkLrmtFg==: --dhchap-ctrl-secret DHHC-1:01:MmYxNTk5MDU5NmEzOTc1NWMwN2Y2N2I1ODJmMmVlZWN02LXh: 00:16:13.559 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.559 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.559 18:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.559 18:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.559 18:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.559 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.559 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:13.559 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:13.818 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:16:13.818 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:13.818 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:13.818 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:13.818 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:13.818 18:41:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.818 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:13.818 18:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.818 18:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.818 18:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.818 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:13.818 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:14.076 00:16:14.076 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.076 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.076 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.334 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.334 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.334 18:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.334 18:41:30 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.334 18:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.334 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.334 { 00:16:14.334 "cntlid": 39, 00:16:14.334 "qid": 0, 00:16:14.334 "state": "enabled", 00:16:14.334 "thread": "nvmf_tgt_poll_group_000", 00:16:14.334 "listen_address": { 00:16:14.334 "trtype": "TCP", 00:16:14.334 "adrfam": "IPv4", 00:16:14.334 "traddr": "10.0.0.2", 00:16:14.334 "trsvcid": "4420" 00:16:14.334 }, 00:16:14.334 "peer_address": { 00:16:14.334 "trtype": "TCP", 00:16:14.334 "adrfam": "IPv4", 00:16:14.334 "traddr": "10.0.0.1", 00:16:14.334 "trsvcid": "40136" 00:16:14.334 }, 00:16:14.334 "auth": { 00:16:14.334 "state": "completed", 00:16:14.334 "digest": "sha256", 00:16:14.334 "dhgroup": "ffdhe6144" 00:16:14.334 } 00:16:14.334 } 00:16:14.334 ]' 00:16:14.334 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.334 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.334 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.334 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:14.334 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:14.334 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.334 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.334 18:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.592 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YmVjZjAyZDlmZTQyMTczNzJkNmI0OTA0MzJlYmU1M2I1OTdlYmRhOWQ1ODI4OWU1MDU4YzMwZjliZjQ4ZTFlNyVaiZA=: 00:16:15.157 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.157 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:15.157 18:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.157 18:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.157 18:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.157 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.157 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:15.157 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:15.157 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:15.415 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:16:15.415 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:15.415 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:15.415 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:15.415 18:41:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:16:15.415 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.415 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.415 18:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.415 18:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.415 18:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.415 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.415 18:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.673 00:16:15.673 18:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:15.673 18:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:15.673 18:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.931 18:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.931 18:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:15.931 18:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.931 18:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.931 18:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.931 18:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:15.931 { 00:16:15.931 "cntlid": 41, 00:16:15.931 "qid": 0, 00:16:15.931 "state": "enabled", 00:16:15.931 "thread": "nvmf_tgt_poll_group_000", 00:16:15.931 "listen_address": { 00:16:15.931 "trtype": "TCP", 00:16:15.931 "adrfam": "IPv4", 00:16:15.931 "traddr": "10.0.0.2", 00:16:15.931 "trsvcid": "4420" 00:16:15.931 }, 00:16:15.931 "peer_address": { 00:16:15.931 "trtype": "TCP", 00:16:15.931 "adrfam": "IPv4", 00:16:15.931 "traddr": "10.0.0.1", 00:16:15.931 "trsvcid": "40182" 00:16:15.931 }, 00:16:15.931 "auth": { 00:16:15.931 "state": "completed", 00:16:15.931 "digest": "sha256", 00:16:15.931 "dhgroup": "ffdhe8192" 00:16:15.931 } 00:16:15.931 } 00:16:15.931 ]' 00:16:15.931 18:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.931 18:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:15.931 18:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:15.931 18:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:15.931 18:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:16.189 18:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.189 18:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.189 18:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0
00:16:16.189 18:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjYxZjAxODBhZTM4MzU1YzMwNGUwZTNhYTU0ZjQzZTVkMjRhMGFhZGFkOGUzMGUy0z6AJw==: --dhchap-ctrl-secret DHHC-1:03:M2JiM2FhNDkzYWE2MjYyYTBiZDIyNjA2NTIyODc5MDY2OWY4NzczNjE4ODVhOGQ3NzRhMmEzYzU0ZDdmZTkxZtIpNpg=:
00:16:16.754 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:16.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:16.754 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:16.754 18:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:16.754 18:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:16.754 18:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:16.754 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:16.755 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:16.755 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:17.012 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1
00:16:17.012 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:17.012 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:17.012 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:16:17.012 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:17.012 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:17.012 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:17.012 18:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:17.012 18:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:17.012 18:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:17.012 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:17.012 18:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:17.576
00:16:17.576 18:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:17.576 18:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:17.576 18:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:17.576 18:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:17.576 18:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:17.576 18:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:17.576 18:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:17.576 18:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:17.576 18:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:17.576 {
00:16:17.576 "cntlid": 43,
00:16:17.576 "qid": 0,
00:16:17.576 "state": "enabled",
00:16:17.576 "thread": "nvmf_tgt_poll_group_000",
00:16:17.576 "listen_address": {
00:16:17.576 "trtype": "TCP",
00:16:17.576 "adrfam": "IPv4",
00:16:17.576 "traddr": "10.0.0.2",
00:16:17.576 "trsvcid": "4420"
00:16:17.576 },
00:16:17.576 "peer_address": {
00:16:17.576 "trtype": "TCP",
00:16:17.576 "adrfam": "IPv4",
00:16:17.576 "traddr": "10.0.0.1",
00:16:17.576 "trsvcid": "40222"
00:16:17.576 },
00:16:17.576 "auth": {
00:16:17.576 "state": "completed",
00:16:17.576 "digest": "sha256",
00:16:17.576 "dhgroup": "ffdhe8192"
00:16:17.576 }
00:16:17.576 }
00:16:17.576 ]'
00:16:17.576 18:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:17.833 18:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:17.833 18:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:17.833 18:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:17.833 18:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:17.833 18:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:17.833 18:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:17.833 18:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:18.090 18:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTczYTdlYWRmNmFiMWE3ZTBjMTE5ZWEwYTE0N2VkNWJV/FZF: --dhchap-ctrl-secret DHHC-1:02:OTBkODE1MmRmMmU0NjY5NGY1YjE2YTgzN2U5OWQyZGMxYWYxZjZkNzRhNjRkMDM2c3wcMw==:
00:16:18.654 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:18.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:18.654 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:18.654 18:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:18.654 18:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:18.654 18:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:18.654 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:18.654 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:18.654 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:18.654 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2
00:16:18.654 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:18.654 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:18.654 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:16:18.654 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:18.654 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:18.655 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:18.655 18:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:18.655 18:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:18.655 18:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:18.655 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:18.655 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:19.217
00:16:19.217 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:19.217 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:19.217 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:19.474 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:19.474 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:19.474 18:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:19.474 18:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:19.474 18:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:19.474 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:19.474 {
00:16:19.474 "cntlid": 45,
00:16:19.474 "qid": 0,
00:16:19.474 "state": "enabled",
00:16:19.474 "thread": "nvmf_tgt_poll_group_000",
00:16:19.474 "listen_address": {
00:16:19.474 "trtype": "TCP",
00:16:19.474 "adrfam": "IPv4",
00:16:19.474 "traddr": "10.0.0.2",
00:16:19.474 "trsvcid": "4420"
00:16:19.474 },
00:16:19.474 "peer_address": {
00:16:19.474 "trtype": "TCP",
00:16:19.474 "adrfam": "IPv4",
00:16:19.474 "traddr": "10.0.0.1",
00:16:19.474 "trsvcid": "40252"
00:16:19.474 },
00:16:19.474 "auth": {
00:16:19.474 "state": "completed",
00:16:19.474 "digest": "sha256",
00:16:19.474 "dhgroup": "ffdhe8192"
00:16:19.474 }
00:16:19.474 }
00:16:19.474 ]'
00:16:19.474 18:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:19.474 18:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:19.474 18:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:19.474 18:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:19.474 18:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:19.474 18:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:19.474 18:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:19.474 18:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:19.730 18:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NGM2MDU1MTVmN2ZhNjMxODlmYTA2MTlhODg4YmZkNzBmYmQzZTUxZGFhN2I1ZGRkLrmtFg==: --dhchap-ctrl-secret DHHC-1:01:MmYxNTk5MDU5NmEzOTc1NWMwN2Y2N2I1ODJmMmVlZWN02LXh:
00:16:20.296 18:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:20.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:20.296 18:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:20.296 18:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:20.296 18:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:20.296 18:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:20.296 18:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:20.296 18:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:20.296 18:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:20.553 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3
00:16:20.553 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:20.553 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:20.553 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:16:20.553 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:20.553 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:20.553 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:16:20.553 18:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:20.553 18:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:20.553 18:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:20.553 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:20.553 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:20.811
00:16:21.069 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:21.069 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:21.069 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:21.069 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:21.069 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:21.069 18:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:21.069 18:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:21.069 18:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:21.069 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:21.069 {
00:16:21.069 "cntlid": 47,
00:16:21.069 "qid": 0,
00:16:21.069 "state": "enabled",
00:16:21.069 "thread": "nvmf_tgt_poll_group_000",
00:16:21.069 "listen_address": {
00:16:21.069 "trtype": "TCP",
00:16:21.069 "adrfam": "IPv4",
00:16:21.069 "traddr": "10.0.0.2",
00:16:21.069 "trsvcid": "4420"
00:16:21.069 },
00:16:21.069 "peer_address": {
00:16:21.069 "trtype": "TCP",
00:16:21.069 "adrfam": "IPv4",
00:16:21.069 "traddr": "10.0.0.1",
00:16:21.069 "trsvcid": "60780"
00:16:21.069 },
00:16:21.069 "auth": {
00:16:21.069 "state": "completed",
00:16:21.069 "digest": "sha256",
00:16:21.069 "dhgroup": "ffdhe8192"
00:16:21.069 }
00:16:21.069 }
00:16:21.069 ]'
00:16:21.069 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:21.069 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:21.069 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:21.327 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:21.327 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:21.327 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:21.327 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:21.327 18:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:21.327 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YmVjZjAyZDlmZTQyMTczNzJkNmI0OTA0MzJlYmU1M2I1OTdlYmRhOWQ1ODI4OWU1MDU4YzMwZjliZjQ4ZTFlNyVaiZA=:
00:16:21.893 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:21.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:21.893 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:21.893 18:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:21.893 18:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:21.893 18:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:21.893 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}"
00:16:21.893 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:21.893 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:21.894 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:16:21.894 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:16:22.152 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0
00:16:22.152 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:22.152 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:16:22.152 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:16:22.152 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:22.152 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:22.152 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:22.152 18:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:22.152 18:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:22.152 18:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:22.152 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:22.152 18:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:22.410
00:16:22.410 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:22.410 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:22.410 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:22.669 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:22.669 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:22.669 18:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:22.669 18:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:22.669 18:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:22.669 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:22.669 {
00:16:22.669 "cntlid": 49,
00:16:22.669 "qid": 0,
00:16:22.669 "state": "enabled",
00:16:22.669 "thread": "nvmf_tgt_poll_group_000",
00:16:22.669 "listen_address": {
00:16:22.669 "trtype": "TCP",
00:16:22.669 "adrfam": "IPv4",
00:16:22.669 "traddr": "10.0.0.2",
00:16:22.669 "trsvcid": "4420"
00:16:22.669 },
00:16:22.669 "peer_address": {
00:16:22.669 "trtype": "TCP",
00:16:22.669 "adrfam": "IPv4",
00:16:22.669 "traddr": "10.0.0.1",
00:16:22.669 "trsvcid": "60806"
00:16:22.669 },
00:16:22.669 "auth": {
00:16:22.669 "state": "completed",
00:16:22.669 "digest": "sha384",
00:16:22.669 "dhgroup": "null"
00:16:22.669 }
00:16:22.669 }
00:16:22.669 ]'
00:16:22.669 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:22.669 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:22.669 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:22.669 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:16:22.669 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:22.669 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:22.669 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:22.669 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:22.927 18:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjYxZjAxODBhZTM4MzU1YzMwNGUwZTNhYTU0ZjQzZTVkMjRhMGFhZGFkOGUzMGUy0z6AJw==: --dhchap-ctrl-secret DHHC-1:03:M2JiM2FhNDkzYWE2MjYyYTBiZDIyNjA2NTIyODc5MDY2OWY4NzczNjE4ODVhOGQ3NzRhMmEzYzU0ZDdmZTkxZtIpNpg=:
00:16:23.494 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:23.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:23.494 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:23.494 18:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:23.494 18:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:23.494 18:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:23.494 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:23.494 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:16:23.494 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:16:23.752 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1
00:16:23.752 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:23.752 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:16:23.752 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:16:23.752 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:23.752 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:23.752 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:23.752 18:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:23.752 18:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:23.752 18:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:23.752 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:23.752 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:24.009
00:16:24.009 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:24.009 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:24.009 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:24.009 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:24.009 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:24.009 18:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:24.009 18:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:24.009 18:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:24.009 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:24.009 {
00:16:24.009 "cntlid": 51,
00:16:24.009 "qid": 0,
00:16:24.009 "state": "enabled",
00:16:24.009 "thread": "nvmf_tgt_poll_group_000",
00:16:24.009 "listen_address": {
00:16:24.009 "trtype": "TCP",
00:16:24.009 "adrfam": "IPv4",
00:16:24.009 "traddr": "10.0.0.2",
00:16:24.009 "trsvcid": "4420"
00:16:24.009 },
00:16:24.009 "peer_address": {
00:16:24.009 "trtype": "TCP",
00:16:24.009 "adrfam": "IPv4",
00:16:24.009 "traddr": "10.0.0.1",
00:16:24.009 "trsvcid": "60838"
00:16:24.009 },
00:16:24.009 "auth": {
00:16:24.009 "state": "completed",
00:16:24.009 "digest": "sha384",
00:16:24.009 "dhgroup": "null"
00:16:24.010 }
00:16:24.010 }
00:16:24.010 ]'
00:16:24.266 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:24.266 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:24.266 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:24.266 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:16:24.266 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:24.266 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:24.266 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:24.266 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:24.524 18:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTczYTdlYWRmNmFiMWE3ZTBjMTE5ZWEwYTE0N2VkNWJV/FZF: --dhchap-ctrl-secret DHHC-1:02:OTBkODE1MmRmMmU0NjY5NGY1YjE2YTgzN2U5OWQyZGMxYWYxZjZkNzRhNjRkMDM2c3wcMw==:
00:16:24.825 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:25.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:25.083 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:25.083 18:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:25.083 18:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:25.083 18:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:25.083 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:25.083 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:16:25.083 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:16:25.083 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2
00:16:25.083 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:25.083 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:16:25.083 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:16:25.083 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:25.083 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:25.083 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:25.083 18:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:25.083 18:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:25.083 18:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:25.083 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:25.083 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:25.341
00:16:25.341 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:25.341 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:25.341 18:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:25.599 18:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:25.599 18:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:25.599 18:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:25.599 18:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:25.599 18:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:25.599 18:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:25.599 {
00:16:25.599 "cntlid": 53,
00:16:25.599 "qid": 0,
00:16:25.599 "state": "enabled",
00:16:25.599 "thread": "nvmf_tgt_poll_group_000",
00:16:25.599 "listen_address": {
00:16:25.599 "trtype": "TCP",
00:16:25.599 "adrfam": "IPv4",
00:16:25.599 "traddr": "10.0.0.2",
00:16:25.599 "trsvcid": "4420"
00:16:25.599 },
00:16:25.599 "peer_address": {
00:16:25.599 "trtype": "TCP",
00:16:25.599 "adrfam": "IPv4",
00:16:25.599 "traddr": "10.0.0.1",
00:16:25.599 "trsvcid": "60882"
00:16:25.599 },
00:16:25.599 "auth": {
00:16:25.599 "state": "completed",
00:16:25.599 "digest": "sha384",
00:16:25.599 "dhgroup": "null"
00:16:25.599 }
00:16:25.599 }
00:16:25.599 ]'
00:16:25.599 18:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:25.599 18:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:25.599 18:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:25.599 18:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:16:25.599 18:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:25.599 18:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:25.599 18:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:25.599 18:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:25.856 18:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NGM2MDU1MTVmN2ZhNjMxODlmYTA2MTlhODg4YmZkNzBmYmQzZTUxZGFhN2I1ZGRkLrmtFg==: --dhchap-ctrl-secret DHHC-1:01:MmYxNTk5MDU5NmEzOTc1NWMwN2Y2N2I1ODJmMmVlZWN02LXh:
00:16:26.422 18:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:26.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:26.422 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:26.422 18:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:26.422 18:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.422 18:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:26.422 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:26.422 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:16:26.422 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:16:26.678 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3
00:16:26.678 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:26.678 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:16:26.678 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:16:26.678 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:26.678 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:26.678 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:16:26.678 18:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:26.678 18:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.678 18:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:26.678 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:26.678 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:26.935 00:16:26.935 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:26.935 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:26.935 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.935 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.935 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.935 18:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.935 18:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.935 18:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.935 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:26.935 { 00:16:26.935 "cntlid": 55, 00:16:26.935 "qid": 0, 00:16:26.935 "state": "enabled", 00:16:26.935 "thread": "nvmf_tgt_poll_group_000", 00:16:26.935 "listen_address": { 00:16:26.935 "trtype": "TCP", 00:16:26.935 "adrfam": "IPv4", 00:16:26.936 "traddr": "10.0.0.2", 00:16:26.936 "trsvcid": "4420" 00:16:26.936 }, 00:16:26.936 "peer_address": { 00:16:26.936 "trtype": "TCP", 00:16:26.936 "adrfam": "IPv4", 00:16:26.936 "traddr": "10.0.0.1", 00:16:26.936 "trsvcid": "60904" 00:16:26.936 }, 00:16:26.936 "auth": { 00:16:26.936 "state": "completed", 00:16:26.936 "digest": "sha384", 00:16:26.936 "dhgroup": "null" 00:16:26.936 } 00:16:26.936 } 00:16:26.936 ]' 00:16:26.936 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.193 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:27.193 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.193 
18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:27.193 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:27.193 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.193 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.193 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.463 18:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YmVjZjAyZDlmZTQyMTczNzJkNmI0OTA0MzJlYmU1M2I1OTdlYmRhOWQ1ODI4OWU1MDU4YzMwZjliZjQ4ZTFlNyVaiZA=: 00:16:28.029 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.029 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:28.029 18:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.029 18:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.029 18:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.029 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:28.029 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:28.029 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:28.029 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:28.029 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:16:28.029 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:28.029 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:28.029 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:28.029 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:28.029 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.029 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.029 18:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.029 18:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.029 18:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.029 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.029 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.287 00:16:28.287 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.287 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.287 18:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.545 18:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.545 18:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.545 18:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.545 18:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.545 18:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.545 18:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.545 { 00:16:28.545 "cntlid": 57, 00:16:28.545 "qid": 0, 00:16:28.545 "state": "enabled", 00:16:28.545 "thread": "nvmf_tgt_poll_group_000", 00:16:28.545 "listen_address": { 00:16:28.545 "trtype": "TCP", 00:16:28.545 "adrfam": "IPv4", 00:16:28.545 "traddr": "10.0.0.2", 00:16:28.545 "trsvcid": "4420" 00:16:28.545 }, 00:16:28.545 "peer_address": { 00:16:28.545 "trtype": "TCP", 00:16:28.545 "adrfam": "IPv4", 00:16:28.545 "traddr": "10.0.0.1", 00:16:28.545 "trsvcid": "60916" 00:16:28.545 }, 00:16:28.545 "auth": { 00:16:28.545 "state": "completed", 00:16:28.545 "digest": "sha384", 00:16:28.545 "dhgroup": "ffdhe2048" 00:16:28.545 } 00:16:28.545 } 00:16:28.545 ]' 00:16:28.545 18:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.545 18:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:16:28.545 18:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.545 18:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:28.545 18:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.545 18:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.545 18:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.545 18:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.803 18:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjYxZjAxODBhZTM4MzU1YzMwNGUwZTNhYTU0ZjQzZTVkMjRhMGFhZGFkOGUzMGUy0z6AJw==: --dhchap-ctrl-secret DHHC-1:03:M2JiM2FhNDkzYWE2MjYyYTBiZDIyNjA2NTIyODc5MDY2OWY4NzczNjE4ODVhOGQ3NzRhMmEzYzU0ZDdmZTkxZtIpNpg=: 00:16:29.368 18:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.368 18:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:29.368 18:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.368 18:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.368 18:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.368 18:41:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:29.368 18:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:29.368 18:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:29.626 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:16:29.626 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.626 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:29.626 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:29.626 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:29.626 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.626 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.626 18:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.626 18:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.626 18:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.626 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.626 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.884 00:16:29.884 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.884 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.884 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.884 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.884 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.884 18:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.884 18:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.884 18:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.884 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:29.884 { 00:16:29.884 "cntlid": 59, 00:16:29.884 "qid": 0, 00:16:29.884 "state": "enabled", 00:16:29.884 "thread": "nvmf_tgt_poll_group_000", 00:16:29.884 "listen_address": { 00:16:29.884 "trtype": "TCP", 00:16:29.884 "adrfam": "IPv4", 00:16:29.884 "traddr": "10.0.0.2", 00:16:29.884 "trsvcid": "4420" 00:16:29.884 }, 00:16:29.884 "peer_address": { 00:16:29.884 "trtype": "TCP", 00:16:29.884 "adrfam": "IPv4", 00:16:29.884 "traddr": "10.0.0.1", 00:16:29.884 "trsvcid": "60942" 00:16:29.884 }, 00:16:29.884 "auth": { 00:16:29.884 "state": "completed", 00:16:29.884 "digest": "sha384", 00:16:29.884 "dhgroup": "ffdhe2048" 00:16:29.884 } 00:16:29.884 } 00:16:29.884 ]' 00:16:29.884 
18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.142 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:30.142 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.142 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:30.142 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.142 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.142 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.142 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.399 18:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTczYTdlYWRmNmFiMWE3ZTBjMTE5ZWEwYTE0N2VkNWJV/FZF: --dhchap-ctrl-secret DHHC-1:02:OTBkODE1MmRmMmU0NjY5NGY1YjE2YTgzN2U5OWQyZGMxYWYxZjZkNzRhNjRkMDM2c3wcMw==: 00:16:30.966 18:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.966 18:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:30.966 18:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.966 18:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.966 18:41:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.966 18:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:30.966 18:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:30.966 18:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:30.966 18:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:16:30.966 18:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:30.966 18:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:30.966 18:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:30.966 18:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:30.966 18:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.966 18:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.966 18:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.966 18:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.966 18:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.966 18:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:16:30.966 18:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.224 00:16:31.224 18:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.224 18:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.224 18:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.483 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.483 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.483 18:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.483 18:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.483 18:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.483 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.483 { 00:16:31.483 "cntlid": 61, 00:16:31.483 "qid": 0, 00:16:31.483 "state": "enabled", 00:16:31.483 "thread": "nvmf_tgt_poll_group_000", 00:16:31.483 "listen_address": { 00:16:31.483 "trtype": "TCP", 00:16:31.483 "adrfam": "IPv4", 00:16:31.483 "traddr": "10.0.0.2", 00:16:31.483 "trsvcid": "4420" 00:16:31.483 }, 00:16:31.483 "peer_address": { 00:16:31.483 "trtype": "TCP", 00:16:31.483 "adrfam": "IPv4", 00:16:31.483 "traddr": "10.0.0.1", 00:16:31.483 "trsvcid": "44690" 00:16:31.483 }, 00:16:31.483 "auth": { 00:16:31.483 "state": "completed", 00:16:31.483 "digest": 
"sha384", 00:16:31.483 "dhgroup": "ffdhe2048" 00:16:31.483 } 00:16:31.483 } 00:16:31.483 ]' 00:16:31.483 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.483 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.483 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:31.483 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:31.483 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:31.742 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.742 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.742 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.742 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NGM2MDU1MTVmN2ZhNjMxODlmYTA2MTlhODg4YmZkNzBmYmQzZTUxZGFhN2I1ZGRkLrmtFg==: --dhchap-ctrl-secret DHHC-1:01:MmYxNTk5MDU5NmEzOTc1NWMwN2Y2N2I1ODJmMmVlZWN02LXh: 00:16:32.308 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.308 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.308 18:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.308 18:41:48 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.308 18:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.308 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:32.308 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:32.308 18:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:32.566 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:16:32.566 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:32.566 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:32.566 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:32.566 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:32.566 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.566 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:32.566 18:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.566 18:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.566 18:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.566 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:32.566 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:32.824 00:16:32.824 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:32.824 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:32.824 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.082 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.082 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.082 18:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.082 18:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.082 18:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.082 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:33.082 { 00:16:33.082 "cntlid": 63, 00:16:33.082 "qid": 0, 00:16:33.082 "state": "enabled", 00:16:33.082 "thread": "nvmf_tgt_poll_group_000", 00:16:33.082 "listen_address": { 00:16:33.082 "trtype": "TCP", 00:16:33.082 "adrfam": "IPv4", 00:16:33.082 "traddr": "10.0.0.2", 00:16:33.082 "trsvcid": "4420" 00:16:33.082 }, 00:16:33.082 "peer_address": { 00:16:33.082 "trtype": "TCP", 00:16:33.082 "adrfam": "IPv4", 00:16:33.082 "traddr": "10.0.0.1", 00:16:33.082 "trsvcid": "44710" 00:16:33.082 }, 00:16:33.082 "auth": 
{ 00:16:33.082 "state": "completed", 00:16:33.082 "digest": "sha384", 00:16:33.082 "dhgroup": "ffdhe2048" 00:16:33.082 } 00:16:33.082 } 00:16:33.082 ]' 00:16:33.082 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:33.082 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:33.082 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.082 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:33.082 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.082 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.082 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.082 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.341 18:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YmVjZjAyZDlmZTQyMTczNzJkNmI0OTA0MzJlYmU1M2I1OTdlYmRhOWQ1ODI4OWU1MDU4YzMwZjliZjQ4ZTFlNyVaiZA=: 00:16:33.908 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.908 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:33.908 18:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.908 18:41:50 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.908 18:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.908 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:33.908 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.908 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:33.908 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:34.166 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:16:34.166 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:34.166 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:34.166 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:34.166 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:34.166 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.166 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.166 18:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.166 18:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.166 18:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.166 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.166 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.166 00:16:34.424 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:34.424 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.424 18:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.424 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.424 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.424 18:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.424 18:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.424 18:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.424 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.424 { 00:16:34.424 "cntlid": 65, 00:16:34.424 "qid": 0, 00:16:34.424 "state": "enabled", 00:16:34.424 "thread": "nvmf_tgt_poll_group_000", 00:16:34.424 "listen_address": { 00:16:34.424 "trtype": "TCP", 00:16:34.424 "adrfam": "IPv4", 00:16:34.424 "traddr": "10.0.0.2", 00:16:34.424 "trsvcid": "4420" 00:16:34.424 }, 00:16:34.424 "peer_address": { 00:16:34.424 "trtype": "TCP", 
00:16:34.424 "adrfam": "IPv4", 00:16:34.424 "traddr": "10.0.0.1", 00:16:34.424 "trsvcid": "44738" 00:16:34.424 }, 00:16:34.424 "auth": { 00:16:34.424 "state": "completed", 00:16:34.424 "digest": "sha384", 00:16:34.424 "dhgroup": "ffdhe3072" 00:16:34.424 } 00:16:34.424 } 00:16:34.424 ]' 00:16:34.424 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.424 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:34.424 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.682 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:34.682 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.682 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.682 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.682 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.682 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjYxZjAxODBhZTM4MzU1YzMwNGUwZTNhYTU0ZjQzZTVkMjRhMGFhZGFkOGUzMGUy0z6AJw==: --dhchap-ctrl-secret DHHC-1:03:M2JiM2FhNDkzYWE2MjYyYTBiZDIyNjA2NTIyODc5MDY2OWY4NzczNjE4ODVhOGQ3NzRhMmEzYzU0ZDdmZTkxZtIpNpg=: 00:16:35.247 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.247 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.247 18:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.247 18:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.247 18:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.248 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.248 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:35.248 18:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:35.506 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:16:35.506 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.506 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:35.506 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:35.506 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:35.506 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.506 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.506 18:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.506 18:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.506 18:41:52 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.506 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.506 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.764 00:16:35.764 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.764 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.764 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.022 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.022 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.022 18:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.022 18:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.022 18:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.022 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.022 { 00:16:36.022 "cntlid": 67, 00:16:36.022 "qid": 0, 00:16:36.022 "state": "enabled", 00:16:36.022 "thread": "nvmf_tgt_poll_group_000", 00:16:36.022 "listen_address": { 00:16:36.022 "trtype": "TCP", 00:16:36.022 "adrfam": 
"IPv4", 00:16:36.022 "traddr": "10.0.0.2", 00:16:36.022 "trsvcid": "4420" 00:16:36.022 }, 00:16:36.022 "peer_address": { 00:16:36.022 "trtype": "TCP", 00:16:36.022 "adrfam": "IPv4", 00:16:36.022 "traddr": "10.0.0.1", 00:16:36.022 "trsvcid": "44776" 00:16:36.022 }, 00:16:36.022 "auth": { 00:16:36.022 "state": "completed", 00:16:36.022 "digest": "sha384", 00:16:36.022 "dhgroup": "ffdhe3072" 00:16:36.022 } 00:16:36.022 } 00:16:36.022 ]' 00:16:36.022 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.022 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:36.022 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.022 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:36.022 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.022 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.022 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.022 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.280 18:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTczYTdlYWRmNmFiMWE3ZTBjMTE5ZWEwYTE0N2VkNWJV/FZF: --dhchap-ctrl-secret DHHC-1:02:OTBkODE1MmRmMmU0NjY5NGY1YjE2YTgzN2U5OWQyZGMxYWYxZjZkNzRhNjRkMDM2c3wcMw==: 00:16:36.845 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:16:36.845 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:36.845 18:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.846 18:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.846 18:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.846 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:36.846 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:36.846 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:37.103 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:16:37.103 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.103 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:37.103 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:37.103 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:37.103 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.103 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.103 18:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.103 18:41:53 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.103 18:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.103 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.103 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.361 00:16:37.361 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.361 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.361 18:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.361 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.619 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.619 18:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.619 18:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.619 18:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.619 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.619 { 00:16:37.619 "cntlid": 69, 00:16:37.619 "qid": 0, 00:16:37.619 "state": "enabled", 00:16:37.619 "thread": 
"nvmf_tgt_poll_group_000", 00:16:37.619 "listen_address": { 00:16:37.619 "trtype": "TCP", 00:16:37.619 "adrfam": "IPv4", 00:16:37.619 "traddr": "10.0.0.2", 00:16:37.619 "trsvcid": "4420" 00:16:37.619 }, 00:16:37.619 "peer_address": { 00:16:37.619 "trtype": "TCP", 00:16:37.619 "adrfam": "IPv4", 00:16:37.619 "traddr": "10.0.0.1", 00:16:37.619 "trsvcid": "44800" 00:16:37.619 }, 00:16:37.619 "auth": { 00:16:37.619 "state": "completed", 00:16:37.619 "digest": "sha384", 00:16:37.619 "dhgroup": "ffdhe3072" 00:16:37.619 } 00:16:37.619 } 00:16:37.619 ]' 00:16:37.619 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.619 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.619 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.619 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:37.619 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.619 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.619 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.619 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.876 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NGM2MDU1MTVmN2ZhNjMxODlmYTA2MTlhODg4YmZkNzBmYmQzZTUxZGFhN2I1ZGRkLrmtFg==: --dhchap-ctrl-secret DHHC-1:01:MmYxNTk5MDU5NmEzOTc1NWMwN2Y2N2I1ODJmMmVlZWN02LXh: 00:16:38.441 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.441 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.441 18:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.441 18:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.441 18:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.441 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.441 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:38.441 18:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:38.698 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:16:38.698 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.698 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:38.698 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:38.698 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:38.698 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.698 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:38.698 18:41:55 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.698 18:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.698 18:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.698 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.698 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.956 00:16:38.956 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.956 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.956 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.956 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.956 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.956 18:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.956 18:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.956 18:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.956 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.956 { 00:16:38.956 "cntlid": 71, 00:16:38.956 "qid": 0, 00:16:38.956 "state": "enabled", 00:16:38.956 "thread": 
"nvmf_tgt_poll_group_000", 00:16:38.956 "listen_address": { 00:16:38.956 "trtype": "TCP", 00:16:38.956 "adrfam": "IPv4", 00:16:38.956 "traddr": "10.0.0.2", 00:16:38.956 "trsvcid": "4420" 00:16:38.956 }, 00:16:38.956 "peer_address": { 00:16:38.956 "trtype": "TCP", 00:16:38.956 "adrfam": "IPv4", 00:16:38.956 "traddr": "10.0.0.1", 00:16:38.956 "trsvcid": "44832" 00:16:38.956 }, 00:16:38.956 "auth": { 00:16:38.956 "state": "completed", 00:16:38.956 "digest": "sha384", 00:16:38.956 "dhgroup": "ffdhe3072" 00:16:38.956 } 00:16:38.956 } 00:16:38.956 ]' 00:16:38.956 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.956 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:38.956 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.214 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:39.214 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.214 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.214 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.214 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.214 18:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YmVjZjAyZDlmZTQyMTczNzJkNmI0OTA0MzJlYmU1M2I1OTdlYmRhOWQ1ODI4OWU1MDU4YzMwZjliZjQ4ZTFlNyVaiZA=: 00:16:39.779 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.779 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.779 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.779 18:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.779 18:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.779 18:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.779 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.779 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.779 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:39.779 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:40.037 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:16:40.037 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.037 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:40.037 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:40.037 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:40.037 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.037 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:40.037 18:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.037 18:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.037 18:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.037 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.037 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.295 00:16:40.295 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.295 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.295 18:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.553 18:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.553 18:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.553 18:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.553 18:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.553 18:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.553 18:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:16:40.553 { 00:16:40.553 "cntlid": 73, 00:16:40.553 "qid": 0, 00:16:40.553 "state": "enabled", 00:16:40.553 "thread": "nvmf_tgt_poll_group_000", 00:16:40.553 "listen_address": { 00:16:40.553 "trtype": "TCP", 00:16:40.553 "adrfam": "IPv4", 00:16:40.553 "traddr": "10.0.0.2", 00:16:40.553 "trsvcid": "4420" 00:16:40.553 }, 00:16:40.553 "peer_address": { 00:16:40.553 "trtype": "TCP", 00:16:40.553 "adrfam": "IPv4", 00:16:40.553 "traddr": "10.0.0.1", 00:16:40.553 "trsvcid": "43718" 00:16:40.553 }, 00:16:40.553 "auth": { 00:16:40.553 "state": "completed", 00:16:40.553 "digest": "sha384", 00:16:40.553 "dhgroup": "ffdhe4096" 00:16:40.553 } 00:16:40.553 } 00:16:40.553 ]' 00:16:40.553 18:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.553 18:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:40.553 18:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.553 18:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:40.553 18:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.553 18:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.553 18:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.553 18:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.811 18:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjYxZjAxODBhZTM4MzU1YzMwNGUwZTNhYTU0ZjQzZTVkMjRhMGFhZGFkOGUzMGUy0z6AJw==: --dhchap-ctrl-secret 
DHHC-1:03:M2JiM2FhNDkzYWE2MjYyYTBiZDIyNjA2NTIyODc5MDY2OWY4NzczNjE4ODVhOGQ3NzRhMmEzYzU0ZDdmZTkxZtIpNpg=: 00:16:41.447 18:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.447 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.447 18:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.447 18:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.447 18:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.447 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.447 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:41.447 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:41.705 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:16:41.705 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.705 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:41.705 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:41.705 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:41.705 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.705 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.705 18:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.705 18:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.705 18:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.705 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.705 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.963 00:16:41.963 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.963 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.963 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.963 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.963 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.963 18:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.963 18:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.963 18:41:58 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.963 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.963 { 00:16:41.963 "cntlid": 75, 00:16:41.963 "qid": 0, 00:16:41.963 "state": "enabled", 00:16:41.963 "thread": "nvmf_tgt_poll_group_000", 00:16:41.963 "listen_address": { 00:16:41.963 "trtype": "TCP", 00:16:41.963 "adrfam": "IPv4", 00:16:41.963 "traddr": "10.0.0.2", 00:16:41.963 "trsvcid": "4420" 00:16:41.963 }, 00:16:41.963 "peer_address": { 00:16:41.963 "trtype": "TCP", 00:16:41.963 "adrfam": "IPv4", 00:16:41.963 "traddr": "10.0.0.1", 00:16:41.963 "trsvcid": "43750" 00:16:41.963 }, 00:16:41.963 "auth": { 00:16:41.963 "state": "completed", 00:16:41.963 "digest": "sha384", 00:16:41.963 "dhgroup": "ffdhe4096" 00:16:41.963 } 00:16:41.963 } 00:16:41.963 ]' 00:16:41.963 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.221 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.221 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.221 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:42.221 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.221 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.221 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.221 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.478 18:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTczYTdlYWRmNmFiMWE3ZTBjMTE5ZWEwYTE0N2VkNWJV/FZF: --dhchap-ctrl-secret DHHC-1:02:OTBkODE1MmRmMmU0NjY5NGY1YjE2YTgzN2U5OWQyZGMxYWYxZjZkNzRhNjRkMDM2c3wcMw==: 00:16:43.043 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.043 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:43.043 18:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.043 18:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.043 18:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.043 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.043 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:43.043 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:43.043 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:16:43.043 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.043 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:43.043 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:43.043 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:43.043 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:16:43.043 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.043 18:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.043 18:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.043 18:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.043 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.043 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.301 00:16:43.301 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.301 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.301 18:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.559 18:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.559 18:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.559 18:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.559 18:42:00 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.559 18:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.559 18:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.559 { 00:16:43.559 "cntlid": 77, 00:16:43.559 "qid": 0, 00:16:43.559 "state": "enabled", 00:16:43.559 "thread": "nvmf_tgt_poll_group_000", 00:16:43.559 "listen_address": { 00:16:43.559 "trtype": "TCP", 00:16:43.559 "adrfam": "IPv4", 00:16:43.559 "traddr": "10.0.0.2", 00:16:43.559 "trsvcid": "4420" 00:16:43.559 }, 00:16:43.559 "peer_address": { 00:16:43.559 "trtype": "TCP", 00:16:43.559 "adrfam": "IPv4", 00:16:43.559 "traddr": "10.0.0.1", 00:16:43.559 "trsvcid": "43790" 00:16:43.559 }, 00:16:43.559 "auth": { 00:16:43.559 "state": "completed", 00:16:43.559 "digest": "sha384", 00:16:43.559 "dhgroup": "ffdhe4096" 00:16:43.559 } 00:16:43.559 } 00:16:43.559 ]' 00:16:43.559 18:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.559 18:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:43.559 18:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.559 18:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:43.559 18:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.817 18:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.817 18:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.817 18:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.817 18:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NGM2MDU1MTVmN2ZhNjMxODlmYTA2MTlhODg4YmZkNzBmYmQzZTUxZGFhN2I1ZGRkLrmtFg==: --dhchap-ctrl-secret DHHC-1:01:MmYxNTk5MDU5NmEzOTc1NWMwN2Y2N2I1ODJmMmVlZWN02LXh: 00:16:44.384 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.384 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.384 18:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.384 18:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.642 18:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.642 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.642 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:44.642 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:44.642 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:16:44.642 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.642 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:44.642 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:44.642 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:44.642 18:42:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.642 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:44.642 18:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.642 18:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.642 18:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.642 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:44.642 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:44.900 00:16:44.900 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.900 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.900 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.162 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.162 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.162 18:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.162 18:42:01 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.162 18:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.162 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.162 { 00:16:45.162 "cntlid": 79, 00:16:45.162 "qid": 0, 00:16:45.162 "state": "enabled", 00:16:45.162 "thread": "nvmf_tgt_poll_group_000", 00:16:45.162 "listen_address": { 00:16:45.162 "trtype": "TCP", 00:16:45.162 "adrfam": "IPv4", 00:16:45.162 "traddr": "10.0.0.2", 00:16:45.162 "trsvcid": "4420" 00:16:45.162 }, 00:16:45.162 "peer_address": { 00:16:45.162 "trtype": "TCP", 00:16:45.162 "adrfam": "IPv4", 00:16:45.162 "traddr": "10.0.0.1", 00:16:45.162 "trsvcid": "43816" 00:16:45.162 }, 00:16:45.162 "auth": { 00:16:45.162 "state": "completed", 00:16:45.162 "digest": "sha384", 00:16:45.162 "dhgroup": "ffdhe4096" 00:16:45.162 } 00:16:45.162 } 00:16:45.162 ]' 00:16:45.162 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.162 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:45.162 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.162 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:45.162 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.422 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.422 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.422 18:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.422 18:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YmVjZjAyZDlmZTQyMTczNzJkNmI0OTA0MzJlYmU1M2I1OTdlYmRhOWQ1ODI4OWU1MDU4YzMwZjliZjQ4ZTFlNyVaiZA=: 00:16:45.990 18:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.990 18:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:45.990 18:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.990 18:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.990 18:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.990 18:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.990 18:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:45.990 18:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:45.990 18:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:46.249 18:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:16:46.249 18:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.249 18:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:46.249 18:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:46.249 18:42:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:16:46.249 18:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.249 18:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.249 18:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.249 18:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.249 18:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.249 18:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.249 18:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.508 00:16:46.508 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.508 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.508 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.768 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.769 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:46.769 18:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.769 18:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.769 18:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.769 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.769 { 00:16:46.769 "cntlid": 81, 00:16:46.769 "qid": 0, 00:16:46.769 "state": "enabled", 00:16:46.769 "thread": "nvmf_tgt_poll_group_000", 00:16:46.769 "listen_address": { 00:16:46.769 "trtype": "TCP", 00:16:46.769 "adrfam": "IPv4", 00:16:46.769 "traddr": "10.0.0.2", 00:16:46.769 "trsvcid": "4420" 00:16:46.769 }, 00:16:46.769 "peer_address": { 00:16:46.769 "trtype": "TCP", 00:16:46.769 "adrfam": "IPv4", 00:16:46.769 "traddr": "10.0.0.1", 00:16:46.769 "trsvcid": "43838" 00:16:46.769 }, 00:16:46.769 "auth": { 00:16:46.769 "state": "completed", 00:16:46.769 "digest": "sha384", 00:16:46.769 "dhgroup": "ffdhe6144" 00:16:46.769 } 00:16:46.769 } 00:16:46.769 ]' 00:16:46.769 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.769 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:46.769 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.769 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:46.769 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.026 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.026 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.026 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:47.026 18:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjYxZjAxODBhZTM4MzU1YzMwNGUwZTNhYTU0ZjQzZTVkMjRhMGFhZGFkOGUzMGUy0z6AJw==: --dhchap-ctrl-secret DHHC-1:03:M2JiM2FhNDkzYWE2MjYyYTBiZDIyNjA2NTIyODc5MDY2OWY4NzczNjE4ODVhOGQ3NzRhMmEzYzU0ZDdmZTkxZtIpNpg=: 00:16:47.592 18:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.592 18:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:47.592 18:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.592 18:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.592 18:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.592 18:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.592 18:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:47.592 18:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:47.851 18:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:16:47.851 18:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.851 18:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:16:47.851 18:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:47.851 18:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:47.851 18:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.851 18:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.851 18:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.851 18:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.851 18:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.851 18:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.851 18:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.109 00:16:48.109 18:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:48.109 18:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:48.109 18:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.368 18:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:16:48.368 18:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.368 18:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.368 18:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.368 18:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.368 18:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:48.368 { 00:16:48.368 "cntlid": 83, 00:16:48.368 "qid": 0, 00:16:48.368 "state": "enabled", 00:16:48.368 "thread": "nvmf_tgt_poll_group_000", 00:16:48.368 "listen_address": { 00:16:48.368 "trtype": "TCP", 00:16:48.368 "adrfam": "IPv4", 00:16:48.368 "traddr": "10.0.0.2", 00:16:48.368 "trsvcid": "4420" 00:16:48.368 }, 00:16:48.368 "peer_address": { 00:16:48.368 "trtype": "TCP", 00:16:48.368 "adrfam": "IPv4", 00:16:48.368 "traddr": "10.0.0.1", 00:16:48.368 "trsvcid": "43878" 00:16:48.368 }, 00:16:48.368 "auth": { 00:16:48.368 "state": "completed", 00:16:48.368 "digest": "sha384", 00:16:48.368 "dhgroup": "ffdhe6144" 00:16:48.368 } 00:16:48.368 } 00:16:48.368 ]' 00:16:48.368 18:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:48.368 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:48.368 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:48.368 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:48.368 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:48.627 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.627 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.627 18:42:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.627 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTczYTdlYWRmNmFiMWE3ZTBjMTE5ZWEwYTE0N2VkNWJV/FZF: --dhchap-ctrl-secret DHHC-1:02:OTBkODE1MmRmMmU0NjY5NGY1YjE2YTgzN2U5OWQyZGMxYWYxZjZkNzRhNjRkMDM2c3wcMw==: 00:16:49.194 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.194 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.194 18:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.194 18:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.194 18:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.194 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.194 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:49.194 18:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:49.476 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:16:49.476 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:16:49.476 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:49.476 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:49.476 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:49.476 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.476 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.476 18:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.476 18:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.476 18:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.476 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.476 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.735 00:16:49.735 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.735 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.735 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:16:49.994 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.994 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.994 18:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.994 18:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.994 18:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.994 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.994 { 00:16:49.995 "cntlid": 85, 00:16:49.995 "qid": 0, 00:16:49.995 "state": "enabled", 00:16:49.995 "thread": "nvmf_tgt_poll_group_000", 00:16:49.995 "listen_address": { 00:16:49.995 "trtype": "TCP", 00:16:49.995 "adrfam": "IPv4", 00:16:49.995 "traddr": "10.0.0.2", 00:16:49.995 "trsvcid": "4420" 00:16:49.995 }, 00:16:49.995 "peer_address": { 00:16:49.995 "trtype": "TCP", 00:16:49.995 "adrfam": "IPv4", 00:16:49.995 "traddr": "10.0.0.1", 00:16:49.995 "trsvcid": "43900" 00:16:49.995 }, 00:16:49.995 "auth": { 00:16:49.995 "state": "completed", 00:16:49.995 "digest": "sha384", 00:16:49.995 "dhgroup": "ffdhe6144" 00:16:49.995 } 00:16:49.995 } 00:16:49.995 ]' 00:16:49.995 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.995 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.995 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.995 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:49.995 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.995 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.995 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:49.995 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.253 18:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NGM2MDU1MTVmN2ZhNjMxODlmYTA2MTlhODg4YmZkNzBmYmQzZTUxZGFhN2I1ZGRkLrmtFg==: --dhchap-ctrl-secret DHHC-1:01:MmYxNTk5MDU5NmEzOTc1NWMwN2Y2N2I1ODJmMmVlZWN02LXh: 00:16:50.819 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.819 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.819 18:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.819 18:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.819 18:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.819 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.819 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:50.819 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:51.076 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:16:51.076 18:42:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:51.076 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:51.076 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:51.076 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:51.076 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.077 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:51.077 18:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.077 18:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.077 18:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.077 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:51.077 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:51.335 00:16:51.335 18:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.335 18:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.335 18:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq 
-r '.[].name' 00:16:51.593 18:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.593 18:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.593 18:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.593 18:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.593 18:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.593 18:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.593 { 00:16:51.593 "cntlid": 87, 00:16:51.593 "qid": 0, 00:16:51.593 "state": "enabled", 00:16:51.593 "thread": "nvmf_tgt_poll_group_000", 00:16:51.593 "listen_address": { 00:16:51.593 "trtype": "TCP", 00:16:51.593 "adrfam": "IPv4", 00:16:51.593 "traddr": "10.0.0.2", 00:16:51.593 "trsvcid": "4420" 00:16:51.593 }, 00:16:51.593 "peer_address": { 00:16:51.593 "trtype": "TCP", 00:16:51.593 "adrfam": "IPv4", 00:16:51.593 "traddr": "10.0.0.1", 00:16:51.593 "trsvcid": "52906" 00:16:51.593 }, 00:16:51.593 "auth": { 00:16:51.593 "state": "completed", 00:16:51.593 "digest": "sha384", 00:16:51.593 "dhgroup": "ffdhe6144" 00:16:51.593 } 00:16:51.593 } 00:16:51.593 ]' 00:16:51.593 18:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.593 18:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.593 18:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.593 18:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:51.593 18:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.851 18:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.851 18:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:16:51.851 18:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.851 18:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YmVjZjAyZDlmZTQyMTczNzJkNmI0OTA0MzJlYmU1M2I1OTdlYmRhOWQ1ODI4OWU1MDU4YzMwZjliZjQ4ZTFlNyVaiZA=: 00:16:52.418 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.418 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:52.418 18:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.418 18:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.418 18:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.418 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.418 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.418 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:52.418 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:52.676 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha384 ffdhe8192 0 00:16:52.676 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.676 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:52.676 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:52.676 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:52.676 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.676 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.676 18:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.676 18:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.676 18:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.676 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.676 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.241 00:16:53.241 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.241 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.241 18:42:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.241 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.241 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.241 18:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.241 18:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.241 18:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.241 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.241 { 00:16:53.241 "cntlid": 89, 00:16:53.241 "qid": 0, 00:16:53.241 "state": "enabled", 00:16:53.241 "thread": "nvmf_tgt_poll_group_000", 00:16:53.241 "listen_address": { 00:16:53.241 "trtype": "TCP", 00:16:53.241 "adrfam": "IPv4", 00:16:53.241 "traddr": "10.0.0.2", 00:16:53.241 "trsvcid": "4420" 00:16:53.241 }, 00:16:53.241 "peer_address": { 00:16:53.241 "trtype": "TCP", 00:16:53.241 "adrfam": "IPv4", 00:16:53.241 "traddr": "10.0.0.1", 00:16:53.241 "trsvcid": "52930" 00:16:53.241 }, 00:16:53.241 "auth": { 00:16:53.241 "state": "completed", 00:16:53.241 "digest": "sha384", 00:16:53.241 "dhgroup": "ffdhe8192" 00:16:53.241 } 00:16:53.241 } 00:16:53.241 ]' 00:16:53.499 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.499 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:53.499 18:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.499 18:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:53.499 18:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.499 18:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.499 18:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.499 18:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.757 18:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjYxZjAxODBhZTM4MzU1YzMwNGUwZTNhYTU0ZjQzZTVkMjRhMGFhZGFkOGUzMGUy0z6AJw==: --dhchap-ctrl-secret DHHC-1:03:M2JiM2FhNDkzYWE2MjYyYTBiZDIyNjA2NTIyODc5MDY2OWY4NzczNjE4ODVhOGQ3NzRhMmEzYzU0ZDdmZTkxZtIpNpg=: 00:16:54.323 18:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.323 18:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:54.323 18:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.323 18:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.323 18:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.323 18:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.323 18:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:54.323 18:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:16:54.323 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:16:54.323 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.323 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:54.323 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:54.323 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:54.323 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.323 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.323 18:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.323 18:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.324 18:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.324 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.324 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.890 00:16:54.890 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:54.890 18:42:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:54.890 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.148 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.148 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.148 18:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.148 18:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.148 18:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.148 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.148 { 00:16:55.148 "cntlid": 91, 00:16:55.148 "qid": 0, 00:16:55.148 "state": "enabled", 00:16:55.148 "thread": "nvmf_tgt_poll_group_000", 00:16:55.148 "listen_address": { 00:16:55.148 "trtype": "TCP", 00:16:55.148 "adrfam": "IPv4", 00:16:55.149 "traddr": "10.0.0.2", 00:16:55.149 "trsvcid": "4420" 00:16:55.149 }, 00:16:55.149 "peer_address": { 00:16:55.149 "trtype": "TCP", 00:16:55.149 "adrfam": "IPv4", 00:16:55.149 "traddr": "10.0.0.1", 00:16:55.149 "trsvcid": "52956" 00:16:55.149 }, 00:16:55.149 "auth": { 00:16:55.149 "state": "completed", 00:16:55.149 "digest": "sha384", 00:16:55.149 "dhgroup": "ffdhe8192" 00:16:55.149 } 00:16:55.149 } 00:16:55.149 ]' 00:16:55.149 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.149 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:55.149 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.149 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:55.149 18:42:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.149 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.149 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.149 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.407 18:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTczYTdlYWRmNmFiMWE3ZTBjMTE5ZWEwYTE0N2VkNWJV/FZF: --dhchap-ctrl-secret DHHC-1:02:OTBkODE1MmRmMmU0NjY5NGY1YjE2YTgzN2U5OWQyZGMxYWYxZjZkNzRhNjRkMDM2c3wcMw==: 00:16:55.975 18:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.975 18:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:55.975 18:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.975 18:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.975 18:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.975 18:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.975 18:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:55.975 18:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:56.235 18:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:16:56.235 18:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:56.235 18:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:56.235 18:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:56.235 18:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:56.235 18:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.235 18:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.235 18:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.235 18:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.235 18:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.235 18:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.235 18:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.801 
00:16:56.801 18:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.801 18:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.801 18:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.801 18:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.801 18:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.801 18:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.802 18:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.802 18:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.802 18:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.802 { 00:16:56.802 "cntlid": 93, 00:16:56.802 "qid": 0, 00:16:56.802 "state": "enabled", 00:16:56.802 "thread": "nvmf_tgt_poll_group_000", 00:16:56.802 "listen_address": { 00:16:56.802 "trtype": "TCP", 00:16:56.802 "adrfam": "IPv4", 00:16:56.802 "traddr": "10.0.0.2", 00:16:56.802 "trsvcid": "4420" 00:16:56.802 }, 00:16:56.802 "peer_address": { 00:16:56.802 "trtype": "TCP", 00:16:56.802 "adrfam": "IPv4", 00:16:56.802 "traddr": "10.0.0.1", 00:16:56.802 "trsvcid": "52982" 00:16:56.802 }, 00:16:56.802 "auth": { 00:16:56.802 "state": "completed", 00:16:56.802 "digest": "sha384", 00:16:56.802 "dhgroup": "ffdhe8192" 00:16:56.802 } 00:16:56.802 } 00:16:56.802 ]' 00:16:56.802 18:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.802 18:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.802 18:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.802 18:42:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:56.802 18:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:57.060 18:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.060 18:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.060 18:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.060 18:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NGM2MDU1MTVmN2ZhNjMxODlmYTA2MTlhODg4YmZkNzBmYmQzZTUxZGFhN2I1ZGRkLrmtFg==: --dhchap-ctrl-secret DHHC-1:01:MmYxNTk5MDU5NmEzOTc1NWMwN2Y2N2I1ODJmMmVlZWN02LXh: 00:16:57.656 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.656 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:57.656 18:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.656 18:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.656 18:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.656 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.656 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 
00:16:57.656 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:57.915 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:16:57.915 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.915 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:57.915 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:57.915 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:57.915 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.915 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:57.915 18:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.915 18:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.915 18:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.915 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:57.915 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:58.482 
00:16:58.482 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.482 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.482 18:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.482 18:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.482 18:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.482 18:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.482 18:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.482 18:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.482 18:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.482 { 00:16:58.482 "cntlid": 95, 00:16:58.482 "qid": 0, 00:16:58.482 "state": "enabled", 00:16:58.482 "thread": "nvmf_tgt_poll_group_000", 00:16:58.482 "listen_address": { 00:16:58.482 "trtype": "TCP", 00:16:58.482 "adrfam": "IPv4", 00:16:58.482 "traddr": "10.0.0.2", 00:16:58.482 "trsvcid": "4420" 00:16:58.482 }, 00:16:58.482 "peer_address": { 00:16:58.482 "trtype": "TCP", 00:16:58.482 "adrfam": "IPv4", 00:16:58.482 "traddr": "10.0.0.1", 00:16:58.482 "trsvcid": "53002" 00:16:58.482 }, 00:16:58.482 "auth": { 00:16:58.482 "state": "completed", 00:16:58.482 "digest": "sha384", 00:16:58.482 "dhgroup": "ffdhe8192" 00:16:58.482 } 00:16:58.482 } 00:16:58.482 ]' 00:16:58.482 18:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.740 18:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.740 18:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.740 18:42:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:58.740 18:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.740 18:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.740 18:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.740 18:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.999 18:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YmVjZjAyZDlmZTQyMTczNzJkNmI0OTA0MzJlYmU1M2I1OTdlYmRhOWQ1ODI4OWU1MDU4YzMwZjliZjQ4ZTFlNyVaiZA=: 00:16:59.566 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.566 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:59.566 18:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.566 18:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.566 18:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.566 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:59.566 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:59.566 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 
00:16:59.566 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:59.566 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:59.566 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:16:59.566 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.566 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:59.566 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:59.566 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:59.566 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.566 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.567 18:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.567 18:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.567 18:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.567 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.567 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.825 00:16:59.825 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.825 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.825 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.083 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.083 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.083 18:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.083 18:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.083 18:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.083 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.083 { 00:17:00.083 "cntlid": 97, 00:17:00.083 "qid": 0, 00:17:00.083 "state": "enabled", 00:17:00.083 "thread": "nvmf_tgt_poll_group_000", 00:17:00.083 "listen_address": { 00:17:00.083 "trtype": "TCP", 00:17:00.083 "adrfam": "IPv4", 00:17:00.083 "traddr": "10.0.0.2", 00:17:00.083 "trsvcid": "4420" 00:17:00.083 }, 00:17:00.083 "peer_address": { 00:17:00.083 "trtype": "TCP", 00:17:00.083 "adrfam": "IPv4", 00:17:00.083 "traddr": "10.0.0.1", 00:17:00.083 "trsvcid": "53044" 00:17:00.083 }, 00:17:00.083 "auth": { 00:17:00.083 "state": "completed", 00:17:00.083 "digest": "sha512", 00:17:00.083 "dhgroup": "null" 00:17:00.083 } 00:17:00.083 } 00:17:00.083 ]' 00:17:00.083 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
00:17:00.083 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.083 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.083 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:00.083 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.083 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.083 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.083 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.342 18:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjYxZjAxODBhZTM4MzU1YzMwNGUwZTNhYTU0ZjQzZTVkMjRhMGFhZGFkOGUzMGUy0z6AJw==: --dhchap-ctrl-secret DHHC-1:03:M2JiM2FhNDkzYWE2MjYyYTBiZDIyNjA2NTIyODc5MDY2OWY4NzczNjE4ODVhOGQ3NzRhMmEzYzU0ZDdmZTkxZtIpNpg=: 00:17:00.909 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.909 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.909 18:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.909 18:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.909 18:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:17:00.909 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:00.909 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:00.909 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:01.167 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:01.167 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.167 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:01.167 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:01.167 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:01.167 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.167 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.167 18:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.167 18:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.167 18:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.167 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.167 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.424 00:17:01.425 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.425 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.425 18:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.425 18:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.425 18:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.425 18:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.425 18:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.425 18:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.425 18:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:01.425 { 00:17:01.425 "cntlid": 99, 00:17:01.425 "qid": 0, 00:17:01.425 "state": "enabled", 00:17:01.425 "thread": "nvmf_tgt_poll_group_000", 00:17:01.425 "listen_address": { 00:17:01.425 "trtype": "TCP", 00:17:01.425 "adrfam": "IPv4", 00:17:01.425 "traddr": "10.0.0.2", 00:17:01.425 "trsvcid": "4420" 00:17:01.425 }, 00:17:01.425 "peer_address": { 00:17:01.425 "trtype": "TCP", 00:17:01.425 "adrfam": "IPv4", 00:17:01.425 "traddr": "10.0.0.1", 00:17:01.425 "trsvcid": "43090" 00:17:01.425 }, 00:17:01.425 "auth": { 00:17:01.425 "state": "completed", 00:17:01.425 "digest": "sha512", 00:17:01.425 "dhgroup": "null" 00:17:01.425 } 00:17:01.425 } 00:17:01.425 ]' 00:17:01.425 
18:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:01.682 18:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.682 18:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:01.683 18:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:01.683 18:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:01.683 18:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.683 18:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.683 18:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.940 18:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTczYTdlYWRmNmFiMWE3ZTBjMTE5ZWEwYTE0N2VkNWJV/FZF: --dhchap-ctrl-secret DHHC-1:02:OTBkODE1MmRmMmU0NjY5NGY1YjE2YTgzN2U5OWQyZGMxYWYxZjZkNzRhNjRkMDM2c3wcMw==: 00:17:02.506 18:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.506 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:02.506 18:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.506 18:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.506 18:42:19 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.506 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:02.506 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:02.506 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:02.765 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:02.765 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.765 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:02.765 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:02.765 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:02.765 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.765 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.765 18:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.765 18:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.765 18:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.765 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.765 18:42:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.765 00:17:02.765 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.765 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.765 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.023 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.023 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.023 18:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.023 18:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.023 18:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.023 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:03.023 { 00:17:03.023 "cntlid": 101, 00:17:03.023 "qid": 0, 00:17:03.023 "state": "enabled", 00:17:03.023 "thread": "nvmf_tgt_poll_group_000", 00:17:03.023 "listen_address": { 00:17:03.023 "trtype": "TCP", 00:17:03.023 "adrfam": "IPv4", 00:17:03.023 "traddr": "10.0.0.2", 00:17:03.023 "trsvcid": "4420" 00:17:03.023 }, 00:17:03.023 "peer_address": { 00:17:03.023 "trtype": "TCP", 00:17:03.023 "adrfam": "IPv4", 00:17:03.023 "traddr": "10.0.0.1", 00:17:03.023 "trsvcid": "43106" 00:17:03.023 }, 00:17:03.023 "auth": { 00:17:03.023 "state": "completed", 00:17:03.023 "digest": "sha512", 00:17:03.023 "dhgroup": "null" 
00:17:03.023 } 00:17:03.023 } 00:17:03.023 ]' 00:17:03.023 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:03.023 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.023 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:03.023 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:03.023 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:03.281 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.281 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.281 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.281 18:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NGM2MDU1MTVmN2ZhNjMxODlmYTA2MTlhODg4YmZkNzBmYmQzZTUxZGFhN2I1ZGRkLrmtFg==: --dhchap-ctrl-secret DHHC-1:01:MmYxNTk5MDU5NmEzOTc1NWMwN2Y2N2I1ODJmMmVlZWN02LXh: 00:17:03.845 18:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.845 18:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.845 18:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.845 18:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:03.845 18:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.845 18:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.845 18:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:03.845 18:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:04.103 18:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:17:04.103 18:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:04.103 18:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:04.103 18:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:04.103 18:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:04.103 18:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.103 18:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:04.103 18:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.103 18:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.103 18:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.103 18:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:04.103 18:42:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:04.361 00:17:04.361 18:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.361 18:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.361 18:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.619 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.619 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.619 18:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.619 18:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.619 18:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.619 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.619 { 00:17:04.619 "cntlid": 103, 00:17:04.619 "qid": 0, 00:17:04.619 "state": "enabled", 00:17:04.619 "thread": "nvmf_tgt_poll_group_000", 00:17:04.619 "listen_address": { 00:17:04.619 "trtype": "TCP", 00:17:04.619 "adrfam": "IPv4", 00:17:04.619 "traddr": "10.0.0.2", 00:17:04.619 "trsvcid": "4420" 00:17:04.619 }, 00:17:04.619 "peer_address": { 00:17:04.619 "trtype": "TCP", 00:17:04.619 "adrfam": "IPv4", 00:17:04.619 "traddr": "10.0.0.1", 00:17:04.619 "trsvcid": "43134" 00:17:04.619 }, 00:17:04.619 "auth": { 00:17:04.619 "state": "completed", 00:17:04.619 "digest": "sha512", 00:17:04.619 "dhgroup": "null" 00:17:04.619 } 00:17:04.619 } 
00:17:04.619 ]' 00:17:04.619 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.619 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.619 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.619 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:04.619 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.619 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.619 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.620 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.878 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YmVjZjAyZDlmZTQyMTczNzJkNmI0OTA0MzJlYmU1M2I1OTdlYmRhOWQ1ODI4OWU1MDU4YzMwZjliZjQ4ZTFlNyVaiZA=: 00:17:05.444 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.445 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.445 18:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.445 18:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.445 18:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:17:05.445 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.445 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:05.445 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:05.445 18:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:05.704 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:05.704 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.704 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:05.704 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:05.704 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:05.704 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.704 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.704 18:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.704 18:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.704 18:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.704 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.704 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.704 00:17:05.704 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.704 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.704 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.962 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.962 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.962 18:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.962 18:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.962 18:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.962 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:05.962 { 00:17:05.962 "cntlid": 105, 00:17:05.962 "qid": 0, 00:17:05.962 "state": "enabled", 00:17:05.962 "thread": "nvmf_tgt_poll_group_000", 00:17:05.962 "listen_address": { 00:17:05.962 "trtype": "TCP", 00:17:05.962 "adrfam": "IPv4", 00:17:05.962 "traddr": "10.0.0.2", 00:17:05.962 "trsvcid": "4420" 00:17:05.962 }, 00:17:05.962 "peer_address": { 00:17:05.962 "trtype": "TCP", 00:17:05.962 "adrfam": "IPv4", 00:17:05.962 "traddr": "10.0.0.1", 00:17:05.962 "trsvcid": "43160" 00:17:05.962 }, 00:17:05.962 "auth": { 00:17:05.962 
"state": "completed", 00:17:05.962 "digest": "sha512", 00:17:05.962 "dhgroup": "ffdhe2048" 00:17:05.962 } 00:17:05.962 } 00:17:05.963 ]' 00:17:05.963 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:05.963 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:05.963 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:05.963 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:05.963 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.221 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.221 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.221 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.221 18:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjYxZjAxODBhZTM4MzU1YzMwNGUwZTNhYTU0ZjQzZTVkMjRhMGFhZGFkOGUzMGUy0z6AJw==: --dhchap-ctrl-secret DHHC-1:03:M2JiM2FhNDkzYWE2MjYyYTBiZDIyNjA2NTIyODc5MDY2OWY4NzczNjE4ODVhOGQ3NzRhMmEzYzU0ZDdmZTkxZtIpNpg=: 00:17:06.786 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.786 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:06.786 18:42:23 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.786 18:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.786 18:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.786 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.786 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:06.786 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:07.044 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:07.044 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.044 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:07.044 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:07.044 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:07.044 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.044 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.044 18:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.044 18:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.044 18:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.044 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.044 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.302 00:17:07.302 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.302 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.302 18:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.560 18:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.560 18:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.560 18:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.560 18:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.560 18:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.560 18:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:07.560 { 00:17:07.560 "cntlid": 107, 00:17:07.560 "qid": 0, 00:17:07.560 "state": "enabled", 00:17:07.560 "thread": "nvmf_tgt_poll_group_000", 00:17:07.560 "listen_address": { 00:17:07.560 "trtype": "TCP", 00:17:07.560 "adrfam": "IPv4", 00:17:07.560 "traddr": "10.0.0.2", 00:17:07.560 "trsvcid": "4420" 00:17:07.560 }, 00:17:07.560 "peer_address": { 00:17:07.560 "trtype": "TCP", 
00:17:07.560 "adrfam": "IPv4", 00:17:07.560 "traddr": "10.0.0.1", 00:17:07.560 "trsvcid": "43186" 00:17:07.560 }, 00:17:07.560 "auth": { 00:17:07.560 "state": "completed", 00:17:07.560 "digest": "sha512", 00:17:07.560 "dhgroup": "ffdhe2048" 00:17:07.560 } 00:17:07.560 } 00:17:07.560 ]' 00:17:07.560 18:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.560 18:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.560 18:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.560 18:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:07.560 18:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.560 18:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.560 18:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.560 18:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.819 18:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTczYTdlYWRmNmFiMWE3ZTBjMTE5ZWEwYTE0N2VkNWJV/FZF: --dhchap-ctrl-secret DHHC-1:02:OTBkODE1MmRmMmU0NjY5NGY1YjE2YTgzN2U5OWQyZGMxYWYxZjZkNzRhNjRkMDM2c3wcMw==: 00:17:08.387 18:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.387 18:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:08.387 18:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.387 18:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.387 18:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.387 18:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.387 18:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:08.387 18:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:08.646 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:08.646 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.646 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:08.646 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:08.646 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:08.646 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.646 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.646 18:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.646 18:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.646 18:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:17:08.646 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.646 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.646 00:17:08.905 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.905 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.905 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.905 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.905 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.905 18:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.905 18:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.905 18:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.905 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.905 { 00:17:08.905 "cntlid": 109, 00:17:08.905 "qid": 0, 00:17:08.905 "state": "enabled", 00:17:08.905 "thread": "nvmf_tgt_poll_group_000", 00:17:08.905 "listen_address": { 00:17:08.905 "trtype": "TCP", 00:17:08.905 "adrfam": "IPv4", 00:17:08.905 "traddr": "10.0.0.2", 00:17:08.905 "trsvcid": "4420" 
00:17:08.905 }, 00:17:08.905 "peer_address": { 00:17:08.905 "trtype": "TCP", 00:17:08.905 "adrfam": "IPv4", 00:17:08.905 "traddr": "10.0.0.1", 00:17:08.905 "trsvcid": "43218" 00:17:08.905 }, 00:17:08.905 "auth": { 00:17:08.905 "state": "completed", 00:17:08.905 "digest": "sha512", 00:17:08.905 "dhgroup": "ffdhe2048" 00:17:08.905 } 00:17:08.905 } 00:17:08.905 ]' 00:17:08.905 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.905 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.905 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.164 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:09.164 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.164 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.164 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.164 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.164 18:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NGM2MDU1MTVmN2ZhNjMxODlmYTA2MTlhODg4YmZkNzBmYmQzZTUxZGFhN2I1ZGRkLrmtFg==: --dhchap-ctrl-secret DHHC-1:01:MmYxNTk5MDU5NmEzOTc1NWMwN2Y2N2I1ODJmMmVlZWN02LXh: 00:17:09.733 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.733 18:42:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:09.733 18:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.733 18:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.733 18:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.733 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.733 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:09.733 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:09.991 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:17:09.991 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.991 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:09.991 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:09.991 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:09.991 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.991 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:09.991 18:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.991 18:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.991 18:42:26 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.991 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:09.991 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.250 00:17:10.250 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.250 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.250 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.509 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.509 18:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.509 18:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.509 18:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.509 18:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.509 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.509 { 00:17:10.509 "cntlid": 111, 00:17:10.509 "qid": 0, 00:17:10.509 "state": "enabled", 00:17:10.509 "thread": "nvmf_tgt_poll_group_000", 00:17:10.509 "listen_address": { 00:17:10.509 "trtype": "TCP", 00:17:10.509 "adrfam": "IPv4", 00:17:10.509 "traddr": "10.0.0.2", 
00:17:10.509 "trsvcid": "4420" 00:17:10.509 }, 00:17:10.509 "peer_address": { 00:17:10.509 "trtype": "TCP", 00:17:10.509 "adrfam": "IPv4", 00:17:10.509 "traddr": "10.0.0.1", 00:17:10.509 "trsvcid": "43256" 00:17:10.509 }, 00:17:10.509 "auth": { 00:17:10.509 "state": "completed", 00:17:10.509 "digest": "sha512", 00:17:10.509 "dhgroup": "ffdhe2048" 00:17:10.509 } 00:17:10.509 } 00:17:10.509 ]' 00:17:10.509 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:10.509 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.509 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:10.509 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:10.509 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:10.509 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.509 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.509 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.768 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YmVjZjAyZDlmZTQyMTczNzJkNmI0OTA0MzJlYmU1M2I1OTdlYmRhOWQ1ODI4OWU1MDU4YzMwZjliZjQ4ZTFlNyVaiZA=: 00:17:11.336 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.336 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:11.336 18:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.336 18:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.336 18:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.336 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.336 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.336 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:11.336 18:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:11.336 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:17:11.336 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:11.336 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:11.336 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:11.336 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:11.336 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.336 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.336 18:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.336 18:42:28 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.594 18:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.594 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.594 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.594 00:17:11.594 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.594 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.594 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.853 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.853 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.853 18:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.853 18:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.853 18:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.853 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.853 { 00:17:11.853 "cntlid": 113, 00:17:11.853 "qid": 0, 00:17:11.853 "state": "enabled", 00:17:11.853 "thread": 
"nvmf_tgt_poll_group_000", 00:17:11.853 "listen_address": { 00:17:11.853 "trtype": "TCP", 00:17:11.853 "adrfam": "IPv4", 00:17:11.853 "traddr": "10.0.0.2", 00:17:11.853 "trsvcid": "4420" 00:17:11.853 }, 00:17:11.853 "peer_address": { 00:17:11.853 "trtype": "TCP", 00:17:11.853 "adrfam": "IPv4", 00:17:11.853 "traddr": "10.0.0.1", 00:17:11.853 "trsvcid": "48138" 00:17:11.853 }, 00:17:11.853 "auth": { 00:17:11.853 "state": "completed", 00:17:11.853 "digest": "sha512", 00:17:11.853 "dhgroup": "ffdhe3072" 00:17:11.853 } 00:17:11.853 } 00:17:11.853 ]' 00:17:11.853 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.853 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:11.853 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.853 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:11.853 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.112 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.112 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.112 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.112 18:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjYxZjAxODBhZTM4MzU1YzMwNGUwZTNhYTU0ZjQzZTVkMjRhMGFhZGFkOGUzMGUy0z6AJw==: --dhchap-ctrl-secret DHHC-1:03:M2JiM2FhNDkzYWE2MjYyYTBiZDIyNjA2NTIyODc5MDY2OWY4NzczNjE4ODVhOGQ3NzRhMmEzYzU0ZDdmZTkxZtIpNpg=: 00:17:12.676 18:42:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.676 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.676 18:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.676 18:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.676 18:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.676 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.676 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:12.676 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:12.972 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:17:12.972 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:12.972 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:12.972 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:12.972 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:12.972 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.972 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:12.972 18:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.972 18:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.972 18:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.972 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.972 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.230 00:17:13.230 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.230 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.230 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.488 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.488 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.489 18:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.489 18:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.489 18:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.489 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:17:13.489 { 00:17:13.489 "cntlid": 115, 00:17:13.489 "qid": 0, 00:17:13.489 "state": "enabled", 00:17:13.489 "thread": "nvmf_tgt_poll_group_000", 00:17:13.489 "listen_address": { 00:17:13.489 "trtype": "TCP", 00:17:13.489 "adrfam": "IPv4", 00:17:13.489 "traddr": "10.0.0.2", 00:17:13.489 "trsvcid": "4420" 00:17:13.489 }, 00:17:13.489 "peer_address": { 00:17:13.489 "trtype": "TCP", 00:17:13.489 "adrfam": "IPv4", 00:17:13.489 "traddr": "10.0.0.1", 00:17:13.489 "trsvcid": "48158" 00:17:13.489 }, 00:17:13.489 "auth": { 00:17:13.489 "state": "completed", 00:17:13.489 "digest": "sha512", 00:17:13.489 "dhgroup": "ffdhe3072" 00:17:13.489 } 00:17:13.489 } 00:17:13.489 ]' 00:17:13.489 18:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.489 18:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.489 18:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.489 18:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:13.489 18:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.489 18:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.489 18:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.489 18:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.747 18:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTczYTdlYWRmNmFiMWE3ZTBjMTE5ZWEwYTE0N2VkNWJV/FZF: --dhchap-ctrl-secret 
DHHC-1:02:OTBkODE1MmRmMmU0NjY5NGY1YjE2YTgzN2U5OWQyZGMxYWYxZjZkNzRhNjRkMDM2c3wcMw==: 00:17:14.360 18:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.360 18:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:14.360 18:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.360 18:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.360 18:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.360 18:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:14.360 18:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:14.360 18:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:14.637 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:17:14.637 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:14.637 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:14.637 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:14.637 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:14.637 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.637 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.637 18:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.637 18:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.637 18:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.637 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.637 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.637 00:17:14.637 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.637 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.637 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.896 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.896 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.896 18:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.896 18:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.896 18:42:31 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.896 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.896 { 00:17:14.896 "cntlid": 117, 00:17:14.896 "qid": 0, 00:17:14.896 "state": "enabled", 00:17:14.896 "thread": "nvmf_tgt_poll_group_000", 00:17:14.896 "listen_address": { 00:17:14.896 "trtype": "TCP", 00:17:14.896 "adrfam": "IPv4", 00:17:14.896 "traddr": "10.0.0.2", 00:17:14.896 "trsvcid": "4420" 00:17:14.896 }, 00:17:14.896 "peer_address": { 00:17:14.896 "trtype": "TCP", 00:17:14.896 "adrfam": "IPv4", 00:17:14.896 "traddr": "10.0.0.1", 00:17:14.896 "trsvcid": "48182" 00:17:14.896 }, 00:17:14.896 "auth": { 00:17:14.896 "state": "completed", 00:17:14.896 "digest": "sha512", 00:17:14.896 "dhgroup": "ffdhe3072" 00:17:14.896 } 00:17:14.896 } 00:17:14.896 ]' 00:17:14.896 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.896 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:14.896 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.896 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:14.896 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:15.155 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.155 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.155 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.155 18:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-secret DHHC-1:02:NGM2MDU1MTVmN2ZhNjMxODlmYTA2MTlhODg4YmZkNzBmYmQzZTUxZGFhN2I1ZGRkLrmtFg==: --dhchap-ctrl-secret DHHC-1:01:MmYxNTk5MDU5NmEzOTc1NWMwN2Y2N2I1ODJmMmVlZWN02LXh: 00:17:15.721 18:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.721 18:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:15.721 18:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.721 18:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.721 18:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.721 18:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.721 18:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:15.721 18:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:15.979 18:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:17:15.979 18:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.979 18:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:15.979 18:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:15.979 18:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:15.979 18:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.979 18:42:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:15.979 18:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.979 18:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.979 18:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.979 18:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:15.980 18:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:16.238 00:17:16.238 18:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.238 18:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.238 18:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.497 18:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.497 18:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.497 18:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.497 18:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.497 18:42:32 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.497 18:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.497 { 00:17:16.497 "cntlid": 119, 00:17:16.497 "qid": 0, 00:17:16.497 "state": "enabled", 00:17:16.497 "thread": "nvmf_tgt_poll_group_000", 00:17:16.497 "listen_address": { 00:17:16.497 "trtype": "TCP", 00:17:16.497 "adrfam": "IPv4", 00:17:16.497 "traddr": "10.0.0.2", 00:17:16.497 "trsvcid": "4420" 00:17:16.497 }, 00:17:16.497 "peer_address": { 00:17:16.497 "trtype": "TCP", 00:17:16.497 "adrfam": "IPv4", 00:17:16.497 "traddr": "10.0.0.1", 00:17:16.497 "trsvcid": "48206" 00:17:16.497 }, 00:17:16.497 "auth": { 00:17:16.497 "state": "completed", 00:17:16.497 "digest": "sha512", 00:17:16.497 "dhgroup": "ffdhe3072" 00:17:16.497 } 00:17:16.497 } 00:17:16.497 ]' 00:17:16.497 18:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.497 18:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.497 18:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.497 18:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:16.497 18:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.497 18:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.497 18:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.497 18:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.755 18:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-secret DHHC-1:03:YmVjZjAyZDlmZTQyMTczNzJkNmI0OTA0MzJlYmU1M2I1OTdlYmRhOWQ1ODI4OWU1MDU4YzMwZjliZjQ4ZTFlNyVaiZA=: 00:17:17.323 18:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.323 18:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:17.323 18:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.323 18:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.323 18:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.323 18:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.323 18:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.323 18:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:17.323 18:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:17.323 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:17:17.323 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.323 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:17.323 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:17.323 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:17.323 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.323 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.323 18:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.323 18:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.582 18:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.582 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.582 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.841 00:17:17.841 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:17.841 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:17.841 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.841 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.841 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.841 18:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:17:17.841 18:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.841 18:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.841 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:17.841 { 00:17:17.841 "cntlid": 121, 00:17:17.841 "qid": 0, 00:17:17.841 "state": "enabled", 00:17:17.841 "thread": "nvmf_tgt_poll_group_000", 00:17:17.841 "listen_address": { 00:17:17.841 "trtype": "TCP", 00:17:17.841 "adrfam": "IPv4", 00:17:17.841 "traddr": "10.0.0.2", 00:17:17.841 "trsvcid": "4420" 00:17:17.841 }, 00:17:17.841 "peer_address": { 00:17:17.841 "trtype": "TCP", 00:17:17.841 "adrfam": "IPv4", 00:17:17.841 "traddr": "10.0.0.1", 00:17:17.841 "trsvcid": "48230" 00:17:17.841 }, 00:17:17.841 "auth": { 00:17:17.841 "state": "completed", 00:17:17.841 "digest": "sha512", 00:17:17.841 "dhgroup": "ffdhe4096" 00:17:17.841 } 00:17:17.841 } 00:17:17.841 ]' 00:17:17.841 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:17.841 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:17.841 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.100 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:18.100 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.100 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.100 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.100 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.358 18:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjYxZjAxODBhZTM4MzU1YzMwNGUwZTNhYTU0ZjQzZTVkMjRhMGFhZGFkOGUzMGUy0z6AJw==: --dhchap-ctrl-secret DHHC-1:03:M2JiM2FhNDkzYWE2MjYyYTBiZDIyNjA2NTIyODc5MDY2OWY4NzczNjE4ODVhOGQ3NzRhMmEzYzU0ZDdmZTkxZtIpNpg=: 00:17:18.924 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.924 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:18.924 18:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.924 18:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.924 18:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.924 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:18.924 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:18.924 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:18.924 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:17:18.924 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:18.924 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:18.924 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:18.924 18:42:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:18.924 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.924 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.924 18:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.924 18:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.924 18:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.924 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.924 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.182 00:17:19.182 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.182 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.182 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:19.440 18:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.440 18:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.440 18:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.440 18:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.440 18:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.440 18:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.440 { 00:17:19.440 "cntlid": 123, 00:17:19.440 "qid": 0, 00:17:19.440 "state": "enabled", 00:17:19.440 "thread": "nvmf_tgt_poll_group_000", 00:17:19.440 "listen_address": { 00:17:19.440 "trtype": "TCP", 00:17:19.440 "adrfam": "IPv4", 00:17:19.440 "traddr": "10.0.0.2", 00:17:19.440 "trsvcid": "4420" 00:17:19.440 }, 00:17:19.440 "peer_address": { 00:17:19.440 "trtype": "TCP", 00:17:19.440 "adrfam": "IPv4", 00:17:19.440 "traddr": "10.0.0.1", 00:17:19.440 "trsvcid": "48268" 00:17:19.440 }, 00:17:19.440 "auth": { 00:17:19.440 "state": "completed", 00:17:19.440 "digest": "sha512", 00:17:19.440 "dhgroup": "ffdhe4096" 00:17:19.440 } 00:17:19.440 } 00:17:19.440 ]' 00:17:19.440 18:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.440 18:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:19.441 18:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.441 18:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:19.441 18:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.441 18:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.441 18:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.441 18:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.698 18:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTczYTdlYWRmNmFiMWE3ZTBjMTE5ZWEwYTE0N2VkNWJV/FZF: --dhchap-ctrl-secret DHHC-1:02:OTBkODE1MmRmMmU0NjY5NGY1YjE2YTgzN2U5OWQyZGMxYWYxZjZkNzRhNjRkMDM2c3wcMw==: 00:17:20.266 18:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.266 18:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:20.266 18:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.266 18:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.266 18:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.266 18:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:20.266 18:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:20.266 18:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:20.525 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:17:20.525 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.525 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 
00:17:20.525 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:20.525 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:20.525 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.525 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.525 18:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.525 18:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.525 18:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.525 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.525 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.785 00:17:20.785 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.785 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.785 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.045 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:17:21.045 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.045 18:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.045 18:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.045 18:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.045 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:21.045 { 00:17:21.045 "cntlid": 125, 00:17:21.045 "qid": 0, 00:17:21.045 "state": "enabled", 00:17:21.045 "thread": "nvmf_tgt_poll_group_000", 00:17:21.045 "listen_address": { 00:17:21.045 "trtype": "TCP", 00:17:21.045 "adrfam": "IPv4", 00:17:21.045 "traddr": "10.0.0.2", 00:17:21.045 "trsvcid": "4420" 00:17:21.045 }, 00:17:21.045 "peer_address": { 00:17:21.045 "trtype": "TCP", 00:17:21.045 "adrfam": "IPv4", 00:17:21.045 "traddr": "10.0.0.1", 00:17:21.045 "trsvcid": "50014" 00:17:21.045 }, 00:17:21.045 "auth": { 00:17:21.045 "state": "completed", 00:17:21.045 "digest": "sha512", 00:17:21.045 "dhgroup": "ffdhe4096" 00:17:21.045 } 00:17:21.045 } 00:17:21.045 ]' 00:17:21.045 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:21.045 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:21.045 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:21.045 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:21.045 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.045 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.045 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.045 18:42:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.303 18:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NGM2MDU1MTVmN2ZhNjMxODlmYTA2MTlhODg4YmZkNzBmYmQzZTUxZGFhN2I1ZGRkLrmtFg==: --dhchap-ctrl-secret DHHC-1:01:MmYxNTk5MDU5NmEzOTc1NWMwN2Y2N2I1ODJmMmVlZWN02LXh: 00:17:21.868 18:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.868 18:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:21.868 18:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.868 18:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.868 18:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.868 18:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.868 18:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:21.868 18:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:22.126 18:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:17:22.126 18:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:17:22.126 18:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:22.126 18:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:22.126 18:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:22.126 18:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.126 18:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:22.126 18:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.126 18:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.126 18:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.126 18:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:22.126 18:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:22.126 00:17:22.384 18:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.384 18:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.385 18:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.385 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:17:22.385 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.385 18:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.385 18:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.385 18:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.385 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.385 { 00:17:22.385 "cntlid": 127, 00:17:22.385 "qid": 0, 00:17:22.385 "state": "enabled", 00:17:22.385 "thread": "nvmf_tgt_poll_group_000", 00:17:22.385 "listen_address": { 00:17:22.385 "trtype": "TCP", 00:17:22.385 "adrfam": "IPv4", 00:17:22.385 "traddr": "10.0.0.2", 00:17:22.385 "trsvcid": "4420" 00:17:22.385 }, 00:17:22.385 "peer_address": { 00:17:22.385 "trtype": "TCP", 00:17:22.385 "adrfam": "IPv4", 00:17:22.385 "traddr": "10.0.0.1", 00:17:22.385 "trsvcid": "50048" 00:17:22.385 }, 00:17:22.385 "auth": { 00:17:22.385 "state": "completed", 00:17:22.385 "digest": "sha512", 00:17:22.385 "dhgroup": "ffdhe4096" 00:17:22.385 } 00:17:22.385 } 00:17:22.385 ]' 00:17:22.385 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.385 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.385 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.643 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:22.643 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.643 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.643 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.643 18:42:39 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.643 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YmVjZjAyZDlmZTQyMTczNzJkNmI0OTA0MzJlYmU1M2I1OTdlYmRhOWQ1ODI4OWU1MDU4YzMwZjliZjQ4ZTFlNyVaiZA=: 00:17:23.208 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.209 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:23.209 18:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.209 18:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.209 18:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.209 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:23.209 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.209 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:23.209 18:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:23.467 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:17:23.467 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:17:23.467 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:23.467 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:23.467 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:23.467 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.467 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.467 18:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.467 18:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.467 18:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.467 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.467 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.725 00:17:23.983 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.983 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.983 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.983 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.983 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.983 18:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.983 18:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.983 18:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.983 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.983 { 00:17:23.983 "cntlid": 129, 00:17:23.983 "qid": 0, 00:17:23.983 "state": "enabled", 00:17:23.983 "thread": "nvmf_tgt_poll_group_000", 00:17:23.983 "listen_address": { 00:17:23.983 "trtype": "TCP", 00:17:23.983 "adrfam": "IPv4", 00:17:23.983 "traddr": "10.0.0.2", 00:17:23.983 "trsvcid": "4420" 00:17:23.983 }, 00:17:23.983 "peer_address": { 00:17:23.983 "trtype": "TCP", 00:17:23.983 "adrfam": "IPv4", 00:17:23.983 "traddr": "10.0.0.1", 00:17:23.983 "trsvcid": "50078" 00:17:23.983 }, 00:17:23.983 "auth": { 00:17:23.983 "state": "completed", 00:17:23.983 "digest": "sha512", 00:17:23.983 "dhgroup": "ffdhe6144" 00:17:23.983 } 00:17:23.983 } 00:17:23.983 ]' 00:17:23.983 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.983 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.983 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.241 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:24.241 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.241 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.241 18:42:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.241 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.241 18:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjYxZjAxODBhZTM4MzU1YzMwNGUwZTNhYTU0ZjQzZTVkMjRhMGFhZGFkOGUzMGUy0z6AJw==: --dhchap-ctrl-secret DHHC-1:03:M2JiM2FhNDkzYWE2MjYyYTBiZDIyNjA2NTIyODc5MDY2OWY4NzczNjE4ODVhOGQ3NzRhMmEzYzU0ZDdmZTkxZtIpNpg=: 00:17:24.807 18:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.807 18:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:24.807 18:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.807 18:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.807 18:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.807 18:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.807 18:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:24.807 18:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:25.066 18:42:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:17:25.066 18:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.066 18:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:25.066 18:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:25.066 18:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:25.066 18:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.066 18:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.066 18:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.066 18:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.066 18:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.066 18:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.066 18:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.325 00:17:25.584 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.584 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.584 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.584 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.584 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.584 18:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.584 18:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.584 18:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.584 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.584 { 00:17:25.584 "cntlid": 131, 00:17:25.584 "qid": 0, 00:17:25.584 "state": "enabled", 00:17:25.584 "thread": "nvmf_tgt_poll_group_000", 00:17:25.584 "listen_address": { 00:17:25.584 "trtype": "TCP", 00:17:25.584 "adrfam": "IPv4", 00:17:25.584 "traddr": "10.0.0.2", 00:17:25.584 "trsvcid": "4420" 00:17:25.584 }, 00:17:25.584 "peer_address": { 00:17:25.584 "trtype": "TCP", 00:17:25.584 "adrfam": "IPv4", 00:17:25.584 "traddr": "10.0.0.1", 00:17:25.584 "trsvcid": "50114" 00:17:25.584 }, 00:17:25.584 "auth": { 00:17:25.584 "state": "completed", 00:17:25.584 "digest": "sha512", 00:17:25.584 "dhgroup": "ffdhe6144" 00:17:25.584 } 00:17:25.584 } 00:17:25.584 ]' 00:17:25.584 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.584 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.584 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.842 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:25.842 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:17:25.842 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.842 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.842 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.842 18:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTczYTdlYWRmNmFiMWE3ZTBjMTE5ZWEwYTE0N2VkNWJV/FZF: --dhchap-ctrl-secret DHHC-1:02:OTBkODE1MmRmMmU0NjY5NGY1YjE2YTgzN2U5OWQyZGMxYWYxZjZkNzRhNjRkMDM2c3wcMw==: 00:17:26.410 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.668 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:26.668 18:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.668 18:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.668 18:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.668 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.668 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:26.668 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:26.668 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:17:26.668 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.668 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:26.668 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:26.668 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:26.668 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.668 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.668 18:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.668 18:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.668 18:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.668 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.668 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.939 00:17:27.197 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:17:27.197 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.197 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.197 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.197 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.197 18:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.197 18:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.197 18:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.197 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:27.197 { 00:17:27.197 "cntlid": 133, 00:17:27.197 "qid": 0, 00:17:27.197 "state": "enabled", 00:17:27.197 "thread": "nvmf_tgt_poll_group_000", 00:17:27.197 "listen_address": { 00:17:27.197 "trtype": "TCP", 00:17:27.197 "adrfam": "IPv4", 00:17:27.197 "traddr": "10.0.0.2", 00:17:27.197 "trsvcid": "4420" 00:17:27.197 }, 00:17:27.197 "peer_address": { 00:17:27.197 "trtype": "TCP", 00:17:27.197 "adrfam": "IPv4", 00:17:27.197 "traddr": "10.0.0.1", 00:17:27.197 "trsvcid": "50148" 00:17:27.197 }, 00:17:27.197 "auth": { 00:17:27.197 "state": "completed", 00:17:27.197 "digest": "sha512", 00:17:27.197 "dhgroup": "ffdhe6144" 00:17:27.197 } 00:17:27.197 } 00:17:27.197 ]' 00:17:27.197 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:27.197 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:27.197 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.455 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:27.455 18:42:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.455 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.455 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.455 18:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.455 18:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NGM2MDU1MTVmN2ZhNjMxODlmYTA2MTlhODg4YmZkNzBmYmQzZTUxZGFhN2I1ZGRkLrmtFg==: --dhchap-ctrl-secret DHHC-1:01:MmYxNTk5MDU5NmEzOTc1NWMwN2Y2N2I1ODJmMmVlZWN02LXh: 00:17:28.022 18:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.022 18:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:28.022 18:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.022 18:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.281 18:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.281 18:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.281 18:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:28.281 18:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:28.281 18:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:17:28.281 18:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:28.281 18:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:28.281 18:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:28.281 18:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:28.281 18:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.281 18:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:28.281 18:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.281 18:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.281 18:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.281 18:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:28.281 18:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:28.540 00:17:28.799 18:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
hostrpc bdev_nvme_get_controllers 00:17:28.799 18:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.799 18:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.799 18:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.799 18:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.799 18:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.799 18:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.799 18:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.799 18:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.799 { 00:17:28.799 "cntlid": 135, 00:17:28.799 "qid": 0, 00:17:28.799 "state": "enabled", 00:17:28.799 "thread": "nvmf_tgt_poll_group_000", 00:17:28.799 "listen_address": { 00:17:28.799 "trtype": "TCP", 00:17:28.799 "adrfam": "IPv4", 00:17:28.799 "traddr": "10.0.0.2", 00:17:28.799 "trsvcid": "4420" 00:17:28.799 }, 00:17:28.799 "peer_address": { 00:17:28.799 "trtype": "TCP", 00:17:28.800 "adrfam": "IPv4", 00:17:28.800 "traddr": "10.0.0.1", 00:17:28.800 "trsvcid": "50192" 00:17:28.800 }, 00:17:28.800 "auth": { 00:17:28.800 "state": "completed", 00:17:28.800 "digest": "sha512", 00:17:28.800 "dhgroup": "ffdhe6144" 00:17:28.800 } 00:17:28.800 } 00:17:28.800 ]' 00:17:28.800 18:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.800 18:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.800 18:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:29.058 18:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:17:29.058 18:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.058 18:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.058 18:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.058 18:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.058 18:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YmVjZjAyZDlmZTQyMTczNzJkNmI0OTA0MzJlYmU1M2I1OTdlYmRhOWQ1ODI4OWU1MDU4YzMwZjliZjQ4ZTFlNyVaiZA=: 00:17:29.625 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.625 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:29.625 18:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.625 18:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.625 18:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.625 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.625 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.625 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:29.625 18:42:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:29.885 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:17:29.885 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.885 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:29.885 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:29.885 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:29.885 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.885 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.885 18:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.885 18:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.885 18:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.885 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.885 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.452 00:17:30.452 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.452 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.452 18:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.452 18:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.452 18:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.452 18:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.452 18:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.452 18:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.452 18:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.452 { 00:17:30.452 "cntlid": 137, 00:17:30.452 "qid": 0, 00:17:30.452 "state": "enabled", 00:17:30.452 "thread": "nvmf_tgt_poll_group_000", 00:17:30.452 "listen_address": { 00:17:30.452 "trtype": "TCP", 00:17:30.452 "adrfam": "IPv4", 00:17:30.452 "traddr": "10.0.0.2", 00:17:30.452 "trsvcid": "4420" 00:17:30.452 }, 00:17:30.452 "peer_address": { 00:17:30.452 "trtype": "TCP", 00:17:30.452 "adrfam": "IPv4", 00:17:30.452 "traddr": "10.0.0.1", 00:17:30.452 "trsvcid": "50216" 00:17:30.452 }, 00:17:30.452 "auth": { 00:17:30.452 "state": "completed", 00:17:30.452 "digest": "sha512", 00:17:30.452 "dhgroup": "ffdhe8192" 00:17:30.452 } 00:17:30.452 } 00:17:30.452 ]' 00:17:30.711 18:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.711 18:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.711 18:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- 
# jq -r '.[0].auth.dhgroup' 00:17:30.711 18:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:30.711 18:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.711 18:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.711 18:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.711 18:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.002 18:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjYxZjAxODBhZTM4MzU1YzMwNGUwZTNhYTU0ZjQzZTVkMjRhMGFhZGFkOGUzMGUy0z6AJw==: --dhchap-ctrl-secret DHHC-1:03:M2JiM2FhNDkzYWE2MjYyYTBiZDIyNjA2NTIyODc5MDY2OWY4NzczNjE4ODVhOGQ3NzRhMmEzYzU0ZDdmZTkxZtIpNpg=: 00:17:31.569 18:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.569 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:31.569 18:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.569 18:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.569 18:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.569 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.569 18:42:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:31.569 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:31.569 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:17:31.569 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.569 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:31.569 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:31.569 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:31.569 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.569 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.569 18:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.569 18:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.569 18:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.569 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.569 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.135 00:17:32.135 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.135 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.135 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.393 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.393 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.393 18:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.393 18:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.393 18:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.393 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.393 { 00:17:32.393 "cntlid": 139, 00:17:32.393 "qid": 0, 00:17:32.393 "state": "enabled", 00:17:32.393 "thread": "nvmf_tgt_poll_group_000", 00:17:32.393 "listen_address": { 00:17:32.393 "trtype": "TCP", 00:17:32.393 "adrfam": "IPv4", 00:17:32.393 "traddr": "10.0.0.2", 00:17:32.393 "trsvcid": "4420" 00:17:32.393 }, 00:17:32.393 "peer_address": { 00:17:32.393 "trtype": "TCP", 00:17:32.393 "adrfam": "IPv4", 00:17:32.393 "traddr": "10.0.0.1", 00:17:32.393 "trsvcid": "60590" 00:17:32.393 }, 00:17:32.393 "auth": { 00:17:32.393 "state": "completed", 00:17:32.393 "digest": "sha512", 00:17:32.393 "dhgroup": "ffdhe8192" 00:17:32.393 } 00:17:32.393 } 00:17:32.393 ]' 00:17:32.393 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.393 18:42:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.394 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.394 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:32.394 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.394 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.394 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.394 18:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.651 18:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTczYTdlYWRmNmFiMWE3ZTBjMTE5ZWEwYTE0N2VkNWJV/FZF: --dhchap-ctrl-secret DHHC-1:02:OTBkODE1MmRmMmU0NjY5NGY1YjE2YTgzN2U5OWQyZGMxYWYxZjZkNzRhNjRkMDM2c3wcMw==: 00:17:33.267 18:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.267 18:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:33.267 18:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.267 18:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.267 18:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.267 18:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid 
in "${!keys[@]}" 00:17:33.267 18:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:33.267 18:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:33.267 18:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:17:33.267 18:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.267 18:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:33.267 18:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:33.267 18:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:33.267 18:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.267 18:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.267 18:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.267 18:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.267 18:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.267 18:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.267 18:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.833 00:17:33.833 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.833 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.833 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.092 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.092 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.092 18:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.092 18:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.092 18:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.092 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.092 { 00:17:34.092 "cntlid": 141, 00:17:34.092 "qid": 0, 00:17:34.092 "state": "enabled", 00:17:34.092 "thread": "nvmf_tgt_poll_group_000", 00:17:34.092 "listen_address": { 00:17:34.092 "trtype": "TCP", 00:17:34.092 "adrfam": "IPv4", 00:17:34.092 "traddr": "10.0.0.2", 00:17:34.092 "trsvcid": "4420" 00:17:34.092 }, 00:17:34.092 "peer_address": { 00:17:34.092 "trtype": "TCP", 00:17:34.092 "adrfam": "IPv4", 00:17:34.092 "traddr": "10.0.0.1", 00:17:34.092 "trsvcid": "60618" 00:17:34.092 }, 00:17:34.092 "auth": { 00:17:34.092 "state": "completed", 00:17:34.092 "digest": "sha512", 00:17:34.092 "dhgroup": "ffdhe8192" 00:17:34.092 } 00:17:34.092 } 00:17:34.092 ]' 00:17:34.092 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:17:34.093 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.093 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.093 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:34.093 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.093 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.093 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.093 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.351 18:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NGM2MDU1MTVmN2ZhNjMxODlmYTA2MTlhODg4YmZkNzBmYmQzZTUxZGFhN2I1ZGRkLrmtFg==: --dhchap-ctrl-secret DHHC-1:01:MmYxNTk5MDU5NmEzOTc1NWMwN2Y2N2I1ODJmMmVlZWN02LXh: 00:17:34.918 18:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.918 18:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:34.918 18:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.918 18:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.918 18:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.919 
18:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.919 18:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:34.919 18:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:35.178 18:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:17:35.178 18:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.178 18:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:35.178 18:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:35.178 18:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:35.178 18:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.178 18:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:35.178 18:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.178 18:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.178 18:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.178 18:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.178 18:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.437 00:17:35.437 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.437 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.437 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.696 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.696 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.696 18:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.696 18:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.696 18:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.696 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.696 { 00:17:35.696 "cntlid": 143, 00:17:35.696 "qid": 0, 00:17:35.696 "state": "enabled", 00:17:35.696 "thread": "nvmf_tgt_poll_group_000", 00:17:35.696 "listen_address": { 00:17:35.696 "trtype": "TCP", 00:17:35.696 "adrfam": "IPv4", 00:17:35.696 "traddr": "10.0.0.2", 00:17:35.696 "trsvcid": "4420" 00:17:35.696 }, 00:17:35.696 "peer_address": { 00:17:35.696 "trtype": "TCP", 00:17:35.696 "adrfam": "IPv4", 00:17:35.696 "traddr": "10.0.0.1", 00:17:35.696 "trsvcid": "60652" 00:17:35.696 }, 00:17:35.696 "auth": { 00:17:35.696 "state": "completed", 00:17:35.696 "digest": "sha512", 00:17:35.696 "dhgroup": "ffdhe8192" 00:17:35.696 } 00:17:35.696 } 00:17:35.696 ]' 00:17:35.696 18:42:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.696 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.696 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.696 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:35.696 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.954 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.954 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.954 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.954 18:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YmVjZjAyZDlmZTQyMTczNzJkNmI0OTA0MzJlYmU1M2I1OTdlYmRhOWQ1ODI4OWU1MDU4YzMwZjliZjQ4ZTFlNyVaiZA=: 00:17:36.522 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.522 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:36.522 18:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.522 18:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.522 18:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.522 
18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:36.522 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:17:36.522 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:36.522 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:36.522 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:36.522 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:36.780 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:17:36.780 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.780 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:36.780 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:36.780 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:36.780 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.780 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.780 18:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.780 18:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.780 18:42:53 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.780 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.780 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.347 00:17:37.347 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.347 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.347 18:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.347 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.347 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.347 18:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.347 18:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.347 18:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.347 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:37.347 { 00:17:37.347 "cntlid": 145, 00:17:37.347 "qid": 0, 00:17:37.347 "state": "enabled", 00:17:37.347 "thread": "nvmf_tgt_poll_group_000", 00:17:37.347 "listen_address": { 00:17:37.347 "trtype": "TCP", 00:17:37.347 "adrfam": 
"IPv4", 00:17:37.347 "traddr": "10.0.0.2", 00:17:37.347 "trsvcid": "4420" 00:17:37.347 }, 00:17:37.347 "peer_address": { 00:17:37.347 "trtype": "TCP", 00:17:37.347 "adrfam": "IPv4", 00:17:37.347 "traddr": "10.0.0.1", 00:17:37.347 "trsvcid": "60674" 00:17:37.347 }, 00:17:37.347 "auth": { 00:17:37.347 "state": "completed", 00:17:37.347 "digest": "sha512", 00:17:37.347 "dhgroup": "ffdhe8192" 00:17:37.347 } 00:17:37.347 } 00:17:37.347 ]' 00:17:37.347 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.605 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:37.605 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:37.605 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:37.605 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:37.605 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.605 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.606 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.864 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MjYxZjAxODBhZTM4MzU1YzMwNGUwZTNhYTU0ZjQzZTVkMjRhMGFhZGFkOGUzMGUy0z6AJw==: --dhchap-ctrl-secret DHHC-1:03:M2JiM2FhNDkzYWE2MjYyYTBiZDIyNjA2NTIyODc5MDY2OWY4NzczNjE4ODVhOGQ3NzRhMmEzYzU0ZDdmZTkxZtIpNpg=: 00:17:38.433 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.433 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.433 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:38.433 18:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.433 18:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.433 18:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.433 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:38.433 18:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.433 18:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.433 18:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.433 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:38.433 18:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:38.433 18:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:38.433 18:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:38.433 18:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:38.433 18:42:54 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:38.433 18:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:38.433 18:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:38.433 18:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:38.692 request: 00:17:38.692 { 00:17:38.692 "name": "nvme0", 00:17:38.692 "trtype": "tcp", 00:17:38.692 "traddr": "10.0.0.2", 00:17:38.692 "adrfam": "ipv4", 00:17:38.692 "trsvcid": "4420", 00:17:38.692 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:38.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:38.692 "prchk_reftag": false, 00:17:38.692 "prchk_guard": false, 00:17:38.692 "hdgst": false, 00:17:38.692 "ddgst": false, 00:17:38.692 "dhchap_key": "key2", 00:17:38.692 "method": "bdev_nvme_attach_controller", 00:17:38.692 "req_id": 1 00:17:38.692 } 00:17:38.692 Got JSON-RPC error response 00:17:38.692 response: 00:17:38.692 { 00:17:38.692 "code": -5, 00:17:38.692 "message": "Input/output error" 00:17:38.692 } 00:17:38.692 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:38.692 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:38.692 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:38.692 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
00:17:38.692 18:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:38.692 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.692 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.692 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.692 18:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.692 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.692 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.692 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.692 18:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:38.692 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:38.692 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:38.692 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:38.692 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:38.692 
18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:38.692 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:38.692 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:38.692 18:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:39.260 request: 00:17:39.260 { 00:17:39.260 "name": "nvme0", 00:17:39.260 "trtype": "tcp", 00:17:39.260 "traddr": "10.0.0.2", 00:17:39.260 "adrfam": "ipv4", 00:17:39.260 "trsvcid": "4420", 00:17:39.260 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:39.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:39.260 "prchk_reftag": false, 00:17:39.260 "prchk_guard": false, 00:17:39.260 "hdgst": false, 00:17:39.260 "ddgst": false, 00:17:39.260 "dhchap_key": "key1", 00:17:39.260 "dhchap_ctrlr_key": "ckey2", 00:17:39.260 "method": "bdev_nvme_attach_controller", 00:17:39.260 "req_id": 1 00:17:39.260 } 00:17:39.260 Got JSON-RPC error response 00:17:39.260 response: 00:17:39.260 { 00:17:39.260 "code": -5, 00:17:39.260 "message": "Input/output error" 00:17:39.260 } 00:17:39.260 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:39.260 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:39.260 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 
00:17:39.260 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:39.260 18:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.260 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.260 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.260 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.260 18:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:39.260 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.260 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.260 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.260 18:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.260 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:39.260 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.260 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:39.260 18:42:55 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:39.260 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:39.260 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:39.260 18:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.260 18:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.829 request: 00:17:39.829 { 00:17:39.829 "name": "nvme0", 00:17:39.829 "trtype": "tcp", 00:17:39.829 "traddr": "10.0.0.2", 00:17:39.829 "adrfam": "ipv4", 00:17:39.829 "trsvcid": "4420", 00:17:39.829 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:39.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:39.829 "prchk_reftag": false, 00:17:39.829 "prchk_guard": false, 00:17:39.829 "hdgst": false, 00:17:39.829 "ddgst": false, 00:17:39.829 "dhchap_key": "key1", 00:17:39.829 "dhchap_ctrlr_key": "ckey1", 00:17:39.829 "method": "bdev_nvme_attach_controller", 00:17:39.829 "req_id": 1 00:17:39.829 } 00:17:39.829 Got JSON-RPC error response 00:17:39.829 response: 00:17:39.829 { 00:17:39.829 "code": -5, 00:17:39.829 "message": "Input/output error" 00:17:39.829 } 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:39.829 18:42:56 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1086110 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1086110 ']' 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1086110 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1086110 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1086110' 00:17:39.829 killing process with pid 1086110 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1086110 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1086110 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L 
nvmf_auth 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1106819 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1106819 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1106819 ']' 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.829 18:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.765 18:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:40.765 18:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:40.765 18:42:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:40.765 18:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:40.765 18:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.765 18:42:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.765 18:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:40.765 18:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1106819 00:17:40.765 18:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1106819 ']' 00:17:40.765 18:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.765 18:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:40.765 18:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:40.765 18:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:40.765 18:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.023 18:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:41.023 18:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:41.023 18:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:17:41.023 18:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.023 18:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.023 18:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.023 18:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:17:41.023 18:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.023 18:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:41.023 18:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:41.023 18:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:41.023 18:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.023 18:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:41.023 18:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.023 18:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.023 18:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.023 18:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:41.023 18:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:41.590 00:17:41.590 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.590 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.590 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.849 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.849 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.849 18:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.849 18:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.849 18:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.849 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.849 { 00:17:41.849 "cntlid": 1, 00:17:41.849 "qid": 0, 00:17:41.849 "state": "enabled", 00:17:41.849 "thread": "nvmf_tgt_poll_group_000", 00:17:41.849 "listen_address": { 00:17:41.849 "trtype": "TCP", 00:17:41.849 "adrfam": "IPv4", 00:17:41.849 "traddr": "10.0.0.2", 00:17:41.849 "trsvcid": "4420" 00:17:41.849 }, 00:17:41.849 "peer_address": { 00:17:41.849 "trtype": "TCP", 00:17:41.849 "adrfam": "IPv4", 00:17:41.849 "traddr": "10.0.0.1", 00:17:41.849 "trsvcid": 
"40334" 00:17:41.849 }, 00:17:41.849 "auth": { 00:17:41.849 "state": "completed", 00:17:41.849 "digest": "sha512", 00:17:41.849 "dhgroup": "ffdhe8192" 00:17:41.849 } 00:17:41.849 } 00:17:41.849 ]' 00:17:41.849 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.849 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.849 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.849 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:41.849 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.849 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.849 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.849 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.106 18:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YmVjZjAyZDlmZTQyMTczNzJkNmI0OTA0MzJlYmU1M2I1OTdlYmRhOWQ1ODI4OWU1MDU4YzMwZjliZjQ4ZTFlNyVaiZA=: 00:17:42.671 18:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.671 18:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:42.671 18:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:42.671 18:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.671 18:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.671 18:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:42.671 18:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.671 18:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.671 18:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.671 18:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:42.671 18:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:42.929 18:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:42.929 18:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:42.929 18:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:42.929 18:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:42.929 18:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:42.929 18:42:59 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:42.929 18:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:42.929 18:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:42.929 18:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:42.929 request: 00:17:42.929 { 00:17:42.929 "name": "nvme0", 00:17:42.929 "trtype": "tcp", 00:17:42.929 "traddr": "10.0.0.2", 00:17:42.929 "adrfam": "ipv4", 00:17:42.929 "trsvcid": "4420", 00:17:42.929 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:42.929 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:42.929 "prchk_reftag": false, 00:17:42.929 "prchk_guard": false, 00:17:42.929 "hdgst": false, 00:17:42.929 "ddgst": false, 00:17:42.929 "dhchap_key": "key3", 00:17:42.929 "method": "bdev_nvme_attach_controller", 00:17:42.929 "req_id": 1 00:17:42.929 } 00:17:42.929 Got JSON-RPC error response 00:17:42.929 response: 00:17:42.929 { 00:17:42.929 "code": -5, 00:17:42.929 "message": "Input/output error" 00:17:42.929 } 00:17:42.929 18:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:42.929 18:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:42.929 18:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:42.929 18:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:42.929 18:42:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:17:42.929 18:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:17:42.929 18:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:42.929 18:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:43.188 18:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:43.188 18:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:43.188 18:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:43.188 18:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:43.188 18:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:43.188 18:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:43.188 18:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:43.188 18:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:43.188 
18:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:43.446 request: 00:17:43.446 { 00:17:43.446 "name": "nvme0", 00:17:43.446 "trtype": "tcp", 00:17:43.446 "traddr": "10.0.0.2", 00:17:43.446 "adrfam": "ipv4", 00:17:43.446 "trsvcid": "4420", 00:17:43.446 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:43.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:43.446 "prchk_reftag": false, 00:17:43.446 "prchk_guard": false, 00:17:43.446 "hdgst": false, 00:17:43.446 "ddgst": false, 00:17:43.446 "dhchap_key": "key3", 00:17:43.446 "method": "bdev_nvme_attach_controller", 00:17:43.446 "req_id": 1 00:17:43.446 } 00:17:43.446 Got JSON-RPC error response 00:17:43.446 response: 00:17:43.446 { 00:17:43.446 "code": -5, 00:17:43.446 "message": "Input/output error" 00:17:43.446 } 00:17:43.446 18:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:43.446 18:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:43.446 18:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:43.446 18:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:43.446 18:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:43.446 18:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:17:43.446 18:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:43.446 18:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:43.446 18:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:43.446 18:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:43.446 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:43.446 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.446 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.446 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.446 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:43.446 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.446 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.446 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.446 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:43.446 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:43.447 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:43.447 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:43.447 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:43.447 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:43.447 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:43.447 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:43.447 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:43.705 request: 00:17:43.705 { 00:17:43.705 "name": "nvme0", 00:17:43.705 "trtype": "tcp", 00:17:43.705 "traddr": "10.0.0.2", 00:17:43.705 "adrfam": "ipv4", 00:17:43.705 "trsvcid": "4420", 00:17:43.705 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:43.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:43.705 "prchk_reftag": false, 00:17:43.705 "prchk_guard": false, 00:17:43.705 "hdgst": false, 00:17:43.705 "ddgst": false, 00:17:43.705 "dhchap_key": "key0", 00:17:43.705 "dhchap_ctrlr_key": "key1", 00:17:43.705 "method": "bdev_nvme_attach_controller", 00:17:43.705 "req_id": 1 00:17:43.705 } 00:17:43.705 Got JSON-RPC error response 00:17:43.705 response: 00:17:43.705 { 
00:17:43.705 "code": -5, 00:17:43.705 "message": "Input/output error" 00:17:43.705 } 00:17:43.705 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:43.705 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:43.705 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:43.705 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:43.705 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:43.705 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:43.963 00:17:43.963 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:17:43.963 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:17:43.963 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.220 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.220 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.220 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.477 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - 
SIGINT SIGTERM EXIT 00:17:44.477 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:17:44.477 18:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1086350 00:17:44.477 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1086350 ']' 00:17:44.477 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1086350 00:17:44.477 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:44.477 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:44.477 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1086350 00:17:44.477 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:44.477 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:44.478 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1086350' 00:17:44.478 killing process with pid 1086350 00:17:44.478 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1086350 00:17:44.478 18:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1086350 00:17:44.735 18:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:44.735 18:43:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:44.735 18:43:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:44.735 18:43:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:44.735 18:43:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:44.735 18:43:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:44.735 18:43:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:44.735 rmmod nvme_tcp 00:17:44.735 rmmod nvme_fabrics 
00:17:44.735 rmmod nvme_keyring 00:17:44.735 18:43:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:44.735 18:43:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:44.735 18:43:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:44.735 18:43:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1106819 ']' 00:17:44.735 18:43:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1106819 00:17:44.735 18:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1106819 ']' 00:17:44.735 18:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1106819 00:17:44.735 18:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:44.735 18:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:44.735 18:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1106819 00:17:44.735 18:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:44.735 18:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:44.735 18:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1106819' 00:17:44.735 killing process with pid 1106819 00:17:44.735 18:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1106819 00:17:44.735 18:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1106819 00:17:44.994 18:43:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:44.994 18:43:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:44.994 18:43:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:44.994 18:43:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:17:44.994 18:43:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:44.994 18:43:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.994 18:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:44.994 18:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.529 18:43:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:47.529 18:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.vIi /tmp/spdk.key-sha256.8oD /tmp/spdk.key-sha384.ls0 /tmp/spdk.key-sha512.dEi /tmp/spdk.key-sha512.ReZ /tmp/spdk.key-sha384.Woz /tmp/spdk.key-sha256.Mhr '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:47.529 00:17:47.529 real 2m10.222s 00:17:47.529 user 4m59.796s 00:17:47.529 sys 0m20.118s 00:17:47.529 18:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:47.529 18:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.529 ************************************ 00:17:47.529 END TEST nvmf_auth_target 00:17:47.529 ************************************ 00:17:47.529 18:43:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:47.529 18:43:03 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:17:47.529 18:43:03 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:47.529 18:43:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:47.529 18:43:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:47.529 18:43:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:47.529 
************************************ 00:17:47.529 START TEST nvmf_bdevio_no_huge 00:17:47.529 ************************************ 00:17:47.529 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:47.529 * Looking for test storage... 00:17:47.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:47.529 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:47.529 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 
00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:47.530 18:43:03 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:17:47.530 18:43:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:52.853 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:52.853 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:52.853 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:52.854 Found net devices under 0000:86:00.0: cvl_0_0 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:52.854 Found net devices under 0000:86:00.1: cvl_0_1 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:52.854 18:43:08 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:52.854 
18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:52.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:52.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:17:52.854 00:17:52.854 --- 10.0.0.2 ping statistics --- 00:17:52.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.854 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:52.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:52.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:17:52.854 00:17:52.854 --- 10.0.0.1 ping statistics --- 00:17:52.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.854 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:52.854 18:43:08 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:52.854 18:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:52.854 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1111081 00:17:52.854 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1111081 00:17:52.854 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:52.854 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1111081 ']' 00:17:52.854 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.854 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.854 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.854 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.854 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:52.854 [2024-07-15 18:43:09.052393] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:17:52.854 [2024-07-15 18:43:09.052439] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:52.854 [2024-07-15 18:43:09.115805] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:52.854 [2024-07-15 18:43:09.200748] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.854 [2024-07-15 18:43:09.200780] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.854 [2024-07-15 18:43:09.200787] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:52.854 [2024-07-15 18:43:09.200793] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:52.854 [2024-07-15 18:43:09.200798] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:52.854 [2024-07-15 18:43:09.201357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:52.854 [2024-07-15 18:43:09.201443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:52.854 [2024-07-15 18:43:09.201591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:52.854 [2024-07-15 18:43:09.201592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:53.421 [2024-07-15 18:43:09.912387] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:53.421 Malloc0 00:17:53.421 18:43:09 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:53.421 [2024-07-15 18:43:09.956640] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:53.421 18:43:09 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:53.421 { 00:17:53.421 "params": { 00:17:53.421 "name": "Nvme$subsystem", 00:17:53.421 "trtype": "$TEST_TRANSPORT", 00:17:53.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:53.421 "adrfam": "ipv4", 00:17:53.421 "trsvcid": "$NVMF_PORT", 00:17:53.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:53.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:53.421 "hdgst": ${hdgst:-false}, 00:17:53.421 "ddgst": ${ddgst:-false} 00:17:53.421 }, 00:17:53.421 "method": "bdev_nvme_attach_controller" 00:17:53.421 } 00:17:53.421 EOF 00:17:53.421 )") 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:53.421 18:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:53.421 "params": { 00:17:53.421 "name": "Nvme1", 00:17:53.421 "trtype": "tcp", 00:17:53.421 "traddr": "10.0.0.2", 00:17:53.421 "adrfam": "ipv4", 00:17:53.421 "trsvcid": "4420", 00:17:53.421 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:53.421 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:53.421 "hdgst": false, 00:17:53.421 "ddgst": false 00:17:53.421 }, 00:17:53.421 "method": "bdev_nvme_attach_controller" 00:17:53.421 }' 00:17:53.421 [2024-07-15 18:43:10.006271] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:17:53.421 [2024-07-15 18:43:10.006321] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1111332 ] 00:17:53.421 [2024-07-15 18:43:10.066882] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:53.680 [2024-07-15 18:43:10.154063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.680 [2024-07-15 18:43:10.154159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.680 [2024-07-15 18:43:10.154159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.680 I/O targets: 00:17:53.680 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:53.680 00:17:53.680 00:17:53.680 CUnit - A unit testing framework for C - Version 2.1-3 00:17:53.680 http://cunit.sourceforge.net/ 00:17:53.680 00:17:53.680 00:17:53.680 Suite: bdevio tests on: Nvme1n1 00:17:53.680 Test: blockdev write read block ...passed 00:17:53.939 Test: blockdev write zeroes read block ...passed 00:17:53.939 Test: blockdev write zeroes read no split ...passed 00:17:53.939 Test: blockdev write zeroes read split ...passed 00:17:53.939 Test: blockdev write zeroes read split partial ...passed 00:17:53.939 Test: blockdev reset ...[2024-07-15 18:43:10.502058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:53.939 [2024-07-15 18:43:10.502118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114c300 (9): Bad file descriptor 00:17:53.939 [2024-07-15 18:43:10.554791] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:53.939 passed 00:17:53.939 Test: blockdev write read 8 blocks ...passed 00:17:53.939 Test: blockdev write read size > 128k ...passed 00:17:53.939 Test: blockdev write read invalid size ...passed 00:17:53.939 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:53.939 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:53.939 Test: blockdev write read max offset ...passed 00:17:54.198 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:54.198 Test: blockdev writev readv 8 blocks ...passed 00:17:54.198 Test: blockdev writev readv 30 x 1block ...passed 00:17:54.198 Test: blockdev writev readv block ...passed 00:17:54.198 Test: blockdev writev readv size > 128k ...passed 00:17:54.198 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:54.198 Test: blockdev comparev and writev ...[2024-07-15 18:43:10.768895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:54.198 [2024-07-15 18:43:10.768922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.198 [2024-07-15 18:43:10.768936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:54.198 [2024-07-15 18:43:10.768944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:54.198 [2024-07-15 18:43:10.769219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:54.198 [2024-07-15 18:43:10.769234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:54.198 [2024-07-15 18:43:10.769245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:54.198 [2024-07-15 18:43:10.769253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:54.198 [2024-07-15 18:43:10.769521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:54.198 [2024-07-15 18:43:10.769531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:54.198 [2024-07-15 18:43:10.769542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:54.198 [2024-07-15 18:43:10.769549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:54.198 [2024-07-15 18:43:10.769831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:54.198 [2024-07-15 18:43:10.769843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:54.198 [2024-07-15 18:43:10.769854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:54.198 [2024-07-15 18:43:10.769861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:54.198 passed 00:17:54.198 Test: blockdev nvme passthru rw ...passed 00:17:54.198 Test: blockdev nvme passthru vendor specific ...[2024-07-15 18:43:10.851624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:54.198 [2024-07-15 18:43:10.851641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:54.198 [2024-07-15 18:43:10.851786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:54.199 [2024-07-15 18:43:10.851795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:54.199 [2024-07-15 18:43:10.851939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:54.199 [2024-07-15 18:43:10.851948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:54.199 [2024-07-15 18:43:10.852088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:54.199 [2024-07-15 18:43:10.852096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:54.199 passed 00:17:54.199 Test: blockdev nvme admin passthru ...passed 00:17:54.199 Test: blockdev copy ...passed 00:17:54.199 00:17:54.199 Run Summary: Type Total Ran Passed Failed Inactive 00:17:54.199 suites 1 1 n/a 0 0 00:17:54.199 tests 23 23 23 0 0 00:17:54.199 asserts 152 152 152 0 n/a 00:17:54.199 00:17:54.199 Elapsed time = 1.138 seconds 00:17:54.766 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:54.766 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.766 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:54.766 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.766 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:54.766 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@30 -- # nvmftestfini 00:17:54.766 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:54.766 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:17:54.766 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:54.766 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:17:54.766 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:54.766 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:54.766 rmmod nvme_tcp 00:17:54.766 rmmod nvme_fabrics 00:17:54.766 rmmod nvme_keyring 00:17:54.766 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:54.766 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:17:54.766 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:17:54.766 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1111081 ']' 00:17:54.766 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1111081 00:17:54.766 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1111081 ']' 00:17:54.766 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1111081 00:17:54.766 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:17:54.766 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:54.766 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1111081 00:17:54.766 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:54.766 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:54.766 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 1111081' 00:17:54.766 killing process with pid 1111081 00:17:54.766 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1111081 00:17:54.766 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1111081 00:17:55.025 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:55.025 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:55.025 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:55.025 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:55.025 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:55.025 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.025 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.025 18:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.559 18:43:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:57.559 00:17:57.559 real 0m9.963s 00:17:57.559 user 0m12.920s 00:17:57.559 sys 0m4.790s 00:17:57.559 18:43:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:57.559 18:43:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:57.559 ************************************ 00:17:57.559 END TEST nvmf_bdevio_no_huge 00:17:57.559 ************************************ 00:17:57.559 18:43:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:57.559 18:43:13 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:57.559 18:43:13 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:57.559 18:43:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:57.559 18:43:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:57.559 ************************************ 00:17:57.559 START TEST nvmf_tls 00:17:57.559 ************************************ 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:57.559 * Looking for test storage... 00:17:57.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:57.559 18:43:13 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:17:57.559 18:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:02.837 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:02.837 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:02.837 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:02.837 18:43:18 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:02.838 Found net devices under 0000:86:00.0: cvl_0_0 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:02.838 Found net devices under 0000:86:00.1: cvl_0_1 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:02.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:02.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:18:02.838 00:18:02.838 --- 10.0.0.2 ping statistics --- 00:18:02.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.838 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:02.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:02.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:18:02.838 00:18:02.838 --- 10.0.0.1 ping statistics --- 00:18:02.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.838 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1114852 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1114852 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1114852 ']' 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:02.838 18:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.838 [2024-07-15 18:43:18.890460] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:18:02.838 [2024-07-15 18:43:18.890502] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.838 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.838 [2024-07-15 18:43:18.948773] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.838 [2024-07-15 18:43:19.024661] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.838 [2024-07-15 18:43:19.024709] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:02.838 [2024-07-15 18:43:19.024716] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:02.838 [2024-07-15 18:43:19.024722] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:02.838 [2024-07-15 18:43:19.024726] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:02.838 [2024-07-15 18:43:19.024744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.097 18:43:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:03.097 18:43:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:03.097 18:43:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:03.097 18:43:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:03.097 18:43:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.097 18:43:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:03.097 18:43:19 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:03.097 18:43:19 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:03.357 true 00:18:03.357 18:43:19 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:03.357 18:43:19 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:03.616 18:43:20 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:03.616 18:43:20 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:03.616 18:43:20 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:03.616 18:43:20 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:03.616 18:43:20 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:03.875 18:43:20 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:03.875 18:43:20 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:03.875 18:43:20 nvmf_tcp.nvmf_tls -- target/tls.sh@88 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:04.134 18:43:20 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:04.134 18:43:20 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:04.134 18:43:20 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:04.134 18:43:20 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:04.134 18:43:20 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:04.134 18:43:20 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:04.393 18:43:20 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:04.393 18:43:20 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:04.393 18:43:20 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:04.652 18:43:21 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:04.652 18:43:21 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:04.652 18:43:21 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:04.652 18:43:21 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:04.652 18:43:21 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:04.911 18:43:21 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:04.911 18:43:21 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:04.911 18:43:21 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # 
ktls=false 00:18:04.911 18:43:21 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:04.911 18:43:21 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:04.911 18:43:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:04.911 18:43:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:04.911 18:43:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:04.911 18:43:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:04.911 18:43:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:04.911 18:43:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:05.170 18:43:21 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:05.170 18:43:21 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:05.170 18:43:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:05.170 18:43:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:05.170 18:43:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:05.170 18:43:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:05.170 18:43:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:05.170 18:43:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:05.170 18:43:21 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:05.170 18:43:21 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:05.170 18:43:21 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.y7W1Uaasc8 00:18:05.170 18:43:21 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:05.170 
18:43:21 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.uDP8jNS9iK 00:18:05.170 18:43:21 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:05.170 18:43:21 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:05.170 18:43:21 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.y7W1Uaasc8 00:18:05.170 18:43:21 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.uDP8jNS9iK 00:18:05.170 18:43:21 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:05.170 18:43:21 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:05.429 18:43:22 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.y7W1Uaasc8 00:18:05.429 18:43:22 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.y7W1Uaasc8 00:18:05.429 18:43:22 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:05.689 [2024-07-15 18:43:22.245943] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:05.689 18:43:22 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:05.949 18:43:22 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:05.949 [2024-07-15 18:43:22.578806] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:05.949 [2024-07-15 18:43:22.579022] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:18:05.949 18:43:22 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:06.209 malloc0 00:18:06.209 18:43:22 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:06.468 18:43:22 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.y7W1Uaasc8 00:18:06.468 [2024-07-15 18:43:23.100277] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:06.468 18:43:23 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.y7W1Uaasc8 00:18:06.468 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.683 Initializing NVMe Controllers 00:18:18.683 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:18.683 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:18.683 Initialization complete. Launching workers. 
00:18:18.683 ======================================================== 00:18:18.683 Latency(us) 00:18:18.683 Device Information : IOPS MiB/s Average min max 00:18:18.683 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16257.56 63.51 3937.07 862.55 205809.80 00:18:18.683 ======================================================== 00:18:18.683 Total : 16257.56 63.51 3937.07 862.55 205809.80 00:18:18.683 00:18:18.684 18:43:33 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.y7W1Uaasc8 00:18:18.684 18:43:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:18.684 18:43:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:18.684 18:43:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:18.684 18:43:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.y7W1Uaasc8' 00:18:18.684 18:43:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:18.684 18:43:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1117217 00:18:18.684 18:43:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:18.684 18:43:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:18.684 18:43:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1117217 /var/tmp/bdevperf.sock 00:18:18.684 18:43:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1117217 ']' 00:18:18.684 18:43:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:18.684 18:43:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:18.684 18:43:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:18.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:18.684 18:43:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:18.684 18:43:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.684 [2024-07-15 18:43:33.265490] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:18:18.684 [2024-07-15 18:43:33.265538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1117217 ] 00:18:18.684 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.684 [2024-07-15 18:43:33.315366] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.684 [2024-07-15 18:43:33.393520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.684 18:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:18.684 18:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:18.684 18:43:34 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.y7W1Uaasc8 00:18:18.684 [2024-07-15 18:43:34.227256] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:18.684 [2024-07-15 18:43:34.227343] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:18.684 TLSTESTn1 00:18:18.684 18:43:34 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:18.684 Running I/O for 10 seconds... 00:18:28.707 00:18:28.707 Latency(us) 00:18:28.707 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.708 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:28.708 Verification LBA range: start 0x0 length 0x2000 00:18:28.708 TLSTESTn1 : 10.03 4265.26 16.66 0.00 0.00 29955.89 5983.72 68841.29 00:18:28.708 =================================================================================================================== 00:18:28.708 Total : 4265.26 16.66 0.00 0.00 29955.89 5983.72 68841.29 00:18:28.708 0 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1117217 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1117217 ']' 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1117217 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1117217 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1117217' 00:18:28.708 killing process with pid 1117217 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1117217 00:18:28.708 Received shutdown signal, test time was about 10.000000 seconds 00:18:28.708 00:18:28.708 Latency(us) 
00:18:28.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.708 =================================================================================================================== 00:18:28.708 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:28.708 [2024-07-15 18:43:44.520965] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1117217 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uDP8jNS9iK 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uDP8jNS9iK 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uDP8jNS9iK 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uDP8jNS9iK' 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1119159 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1119159 /var/tmp/bdevperf.sock 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1119159 ']' 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:28.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:28.708 18:43:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.708 [2024-07-15 18:43:44.747457] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:18:28.708 [2024-07-15 18:43:44.747507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1119159 ] 00:18:28.708 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.708 [2024-07-15 18:43:44.797611] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.708 [2024-07-15 18:43:44.874367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:28.967 18:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.967 18:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:28.967 18:43:45 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uDP8jNS9iK 00:18:29.227 [2024-07-15 18:43:45.721046] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:29.227 [2024-07-15 18:43:45.721113] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:29.227 [2024-07-15 18:43:45.725642] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:29.227 [2024-07-15 18:43:45.726283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3c570 (107): Transport endpoint is not connected 00:18:29.227 [2024-07-15 18:43:45.727275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3c570 (9): Bad file descriptor 00:18:29.227 [2024-07-15 
18:43:45.728276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:29.227 [2024-07-15 18:43:45.728285] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:29.227 [2024-07-15 18:43:45.728293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:29.227 request: 00:18:29.227 { 00:18:29.227 "name": "TLSTEST", 00:18:29.227 "trtype": "tcp", 00:18:29.227 "traddr": "10.0.0.2", 00:18:29.227 "adrfam": "ipv4", 00:18:29.227 "trsvcid": "4420", 00:18:29.227 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.227 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:29.227 "prchk_reftag": false, 00:18:29.227 "prchk_guard": false, 00:18:29.227 "hdgst": false, 00:18:29.227 "ddgst": false, 00:18:29.227 "psk": "/tmp/tmp.uDP8jNS9iK", 00:18:29.227 "method": "bdev_nvme_attach_controller", 00:18:29.227 "req_id": 1 00:18:29.227 } 00:18:29.227 Got JSON-RPC error response 00:18:29.227 response: 00:18:29.227 { 00:18:29.227 "code": -5, 00:18:29.227 "message": "Input/output error" 00:18:29.227 } 00:18:29.227 18:43:45 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1119159 00:18:29.227 18:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1119159 ']' 00:18:29.227 18:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1119159 00:18:29.227 18:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:29.227 18:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:29.227 18:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1119159 00:18:29.227 18:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:29.227 18:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:29.227 18:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
1119159' 00:18:29.227 killing process with pid 1119159 00:18:29.227 18:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1119159 00:18:29.227 Received shutdown signal, test time was about 10.000000 seconds 00:18:29.227 00:18:29.227 Latency(us) 00:18:29.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.227 =================================================================================================================== 00:18:29.227 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:29.227 [2024-07-15 18:43:45.800363] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:29.227 18:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1119159 00:18:29.486 18:43:45 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:29.486 18:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:29.486 18:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:29.486 18:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:29.486 18:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:29.486 18:43:45 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.y7W1Uaasc8 00:18:29.486 18:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:29.486 18:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.y7W1Uaasc8 00:18:29.486 18:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:29.486 18:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:29.486 18:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:18:29.486 18:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:29.486 18:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.y7W1Uaasc8 00:18:29.486 18:43:45 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:29.486 18:43:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:29.486 18:43:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:29.486 18:43:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.y7W1Uaasc8' 00:18:29.486 18:43:45 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:29.486 18:43:45 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1119320 00:18:29.486 18:43:45 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:29.486 18:43:45 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:29.486 18:43:45 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1119320 /var/tmp/bdevperf.sock 00:18:29.486 18:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1119320 ']' 00:18:29.486 18:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:29.486 18:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:29.486 18:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:29.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:29.486 18:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:29.486 18:43:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.486 [2024-07-15 18:43:46.021452] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:18:29.486 [2024-07-15 18:43:46.021503] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1119320 ] 00:18:29.486 EAL: No free 2048 kB hugepages reported on node 1 00:18:29.486 [2024-07-15 18:43:46.074036] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.486 [2024-07-15 18:43:46.154220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:30.423 18:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:30.423 18:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:30.423 18:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.y7W1Uaasc8 00:18:30.423 [2024-07-15 18:43:46.976031] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:30.423 [2024-07-15 18:43:46.976102] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:30.423 [2024-07-15 18:43:46.986193] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:30.423 [2024-07-15 18:43:46.986216] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for 
identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:30.423 [2024-07-15 18:43:46.986267] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:30.423 [2024-07-15 18:43:46.986353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2537570 (107): Transport endpoint is not connected 00:18:30.423 [2024-07-15 18:43:46.987316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2537570 (9): Bad file descriptor 00:18:30.423 [2024-07-15 18:43:46.988320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:30.423 [2024-07-15 18:43:46.988330] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:30.423 [2024-07-15 18:43:46.988339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:30.423 request: 00:18:30.423 { 00:18:30.423 "name": "TLSTEST", 00:18:30.423 "trtype": "tcp", 00:18:30.423 "traddr": "10.0.0.2", 00:18:30.423 "adrfam": "ipv4", 00:18:30.423 "trsvcid": "4420", 00:18:30.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.423 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:30.423 "prchk_reftag": false, 00:18:30.423 "prchk_guard": false, 00:18:30.423 "hdgst": false, 00:18:30.423 "ddgst": false, 00:18:30.423 "psk": "/tmp/tmp.y7W1Uaasc8", 00:18:30.423 "method": "bdev_nvme_attach_controller", 00:18:30.423 "req_id": 1 00:18:30.423 } 00:18:30.423 Got JSON-RPC error response 00:18:30.423 response: 00:18:30.423 { 00:18:30.423 "code": -5, 00:18:30.423 "message": "Input/output error" 00:18:30.423 } 00:18:30.423 18:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1119320 00:18:30.423 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1119320 ']' 00:18:30.423 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1119320 00:18:30.423 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:30.423 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:30.423 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1119320 00:18:30.423 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:30.423 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:30.423 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1119320' 00:18:30.423 killing process with pid 1119320 00:18:30.423 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1119320 00:18:30.423 Received shutdown signal, test time was about 10.000000 seconds 00:18:30.423 00:18:30.423 Latency(us) 00:18:30.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.423 
=================================================================================================================== 00:18:30.423 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:30.424 [2024-07-15 18:43:47.054732] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:30.424 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1119320 00:18:30.683 18:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:30.683 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:30.683 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:30.683 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:30.683 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:30.683 18:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.y7W1Uaasc8 00:18:30.683 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:30.683 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.y7W1Uaasc8 00:18:30.683 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:30.683 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:30.683 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:30.683 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:30.683 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.y7W1Uaasc8 00:18:30.683 18:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:18:30.683 18:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:30.683 18:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:30.683 18:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.y7W1Uaasc8' 00:18:30.683 18:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:30.683 18:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1119510 00:18:30.683 18:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:30.683 18:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:30.683 18:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1119510 /var/tmp/bdevperf.sock 00:18:30.683 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1119510 ']' 00:18:30.683 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:30.683 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:30.683 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:30.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:30.683 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:30.683 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.683 [2024-07-15 18:43:47.275010] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:18:30.683 [2024-07-15 18:43:47.275059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1119510 ] 00:18:30.683 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.683 [2024-07-15 18:43:47.325898] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.942 [2024-07-15 18:43:47.393066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:30.942 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:30.942 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:30.942 18:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.y7W1Uaasc8 00:18:30.942 [2024-07-15 18:43:47.642325] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:30.942 [2024-07-15 18:43:47.642404] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:30.942 [2024-07-15 18:43:47.647862] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:30.942 [2024-07-15 18:43:47.647887] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:30.942 [2024-07-15 18:43:47.647910] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:30.942 [2024-07-15 18:43:47.648647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240a570 (107): Transport endpoint is not connected 00:18:31.201 [2024-07-15 18:43:47.649640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240a570 (9): Bad file descriptor 00:18:31.201 [2024-07-15 18:43:47.650642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:31.201 [2024-07-15 18:43:47.650655] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:31.201 [2024-07-15 18:43:47.650663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:18:31.201 request: 00:18:31.201 { 00:18:31.201 "name": "TLSTEST", 00:18:31.201 "trtype": "tcp", 00:18:31.201 "traddr": "10.0.0.2", 00:18:31.201 "adrfam": "ipv4", 00:18:31.201 "trsvcid": "4420", 00:18:31.201 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:31.201 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:31.201 "prchk_reftag": false, 00:18:31.201 "prchk_guard": false, 00:18:31.201 "hdgst": false, 00:18:31.201 "ddgst": false, 00:18:31.201 "psk": "/tmp/tmp.y7W1Uaasc8", 00:18:31.201 "method": "bdev_nvme_attach_controller", 00:18:31.201 "req_id": 1 00:18:31.201 } 00:18:31.201 Got JSON-RPC error response 00:18:31.201 response: 00:18:31.201 { 00:18:31.201 "code": -5, 00:18:31.201 "message": "Input/output error" 00:18:31.201 } 00:18:31.201 18:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1119510 00:18:31.201 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1119510 ']' 00:18:31.201 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1119510 00:18:31.201 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:31.201 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:31.201 18:43:47 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1119510 00:18:31.201 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:31.201 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:31.201 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1119510' 00:18:31.201 killing process with pid 1119510 00:18:31.201 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1119510 00:18:31.201 Received shutdown signal, test time was about 10.000000 seconds 00:18:31.201 00:18:31.201 Latency(us) 00:18:31.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.201 =================================================================================================================== 00:18:31.201 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:31.201 [2024-07-15 18:43:47.714420] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:31.201 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1119510 00:18:31.201 18:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:31.201 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:31.201 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:31.201 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:31.201 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:31.201 18:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:31.201 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:31.201 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 '' 00:18:31.201 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:31.201 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:31.201 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:31.201 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:31.201 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:31.202 18:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:31.202 18:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:31.202 18:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:31.202 18:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:31.202 18:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:31.202 18:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1119744 00:18:31.202 18:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:31.202 18:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:31.202 18:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1119744 /var/tmp/bdevperf.sock 00:18:31.202 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1119744 ']' 00:18:31.202 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:31.202 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.202 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:31.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:31.202 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.202 18:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.460 [2024-07-15 18:43:47.933777] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:18:31.460 [2024-07-15 18:43:47.933824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1119744 ] 00:18:31.460 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.460 [2024-07-15 18:43:47.983583] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.460 [2024-07-15 18:43:48.050643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:32.395 18:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:32.395 18:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:32.395 18:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:32.395 [2024-07-15 18:43:48.911117] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:32.395 [2024-07-15 18:43:48.912403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22aeaf0 (9): Bad file descriptor 00:18:32.395 [2024-07-15 18:43:48.913401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:32.395 [2024-07-15 18:43:48.913413] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:32.395 [2024-07-15 18:43:48.913422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:32.395 request: 00:18:32.395 { 00:18:32.395 "name": "TLSTEST", 00:18:32.395 "trtype": "tcp", 00:18:32.395 "traddr": "10.0.0.2", 00:18:32.395 "adrfam": "ipv4", 00:18:32.395 "trsvcid": "4420", 00:18:32.395 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.395 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:32.395 "prchk_reftag": false, 00:18:32.395 "prchk_guard": false, 00:18:32.395 "hdgst": false, 00:18:32.395 "ddgst": false, 00:18:32.395 "method": "bdev_nvme_attach_controller", 00:18:32.395 "req_id": 1 00:18:32.395 } 00:18:32.395 Got JSON-RPC error response 00:18:32.395 response: 00:18:32.395 { 00:18:32.395 "code": -5, 00:18:32.395 "message": "Input/output error" 00:18:32.395 } 00:18:32.395 18:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1119744 00:18:32.395 18:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1119744 ']' 00:18:32.395 18:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1119744 00:18:32.395 18:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:32.395 18:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:32.395 18:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1119744 00:18:32.395 18:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:32.395 18:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:32.395 18:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1119744' 00:18:32.395 killing process with pid 1119744 00:18:32.395 18:43:48 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@967 -- # kill 1119744 00:18:32.395 Received shutdown signal, test time was about 10.000000 seconds 00:18:32.395 00:18:32.395 Latency(us) 00:18:32.395 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.395 =================================================================================================================== 00:18:32.395 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:32.395 18:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1119744 00:18:32.654 18:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:32.654 18:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:32.654 18:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:32.654 18:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:32.654 18:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:32.654 18:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1114852 00:18:32.654 18:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1114852 ']' 00:18:32.654 18:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1114852 00:18:32.654 18:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:32.654 18:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:32.654 18:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1114852 00:18:32.654 18:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:32.654 18:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:32.654 18:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1114852' 00:18:32.654 killing process with pid 1114852 00:18:32.654 18:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 
1114852 00:18:32.654 [2024-07-15 18:43:49.189125] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:32.654 18:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1114852 00:18:32.913 18:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:32.913 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:32.913 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:32.913 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:32.913 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:32.913 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:18:32.913 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:32.913 18:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:32.913 18:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:18:32.913 18:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.rVR2BEBXil 00:18:32.913 18:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:32.913 18:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.rVR2BEBXil 00:18:32.913 18:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:18:32.913 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:32.913 18:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:32.913 18:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.913 18:43:49 nvmf_tcp.nvmf_tls 
-- nvmf/common.sh@481 -- # nvmfpid=1119990 00:18:32.913 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1119990 00:18:32.913 18:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:32.913 18:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1119990 ']' 00:18:32.913 18:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.913 18:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:32.913 18:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.913 18:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:32.913 18:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.913 [2024-07-15 18:43:49.488485] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:18:32.913 [2024-07-15 18:43:49.488532] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.913 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.913 [2024-07-15 18:43:49.545714] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.171 [2024-07-15 18:43:49.628084] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:33.171 [2024-07-15 18:43:49.628120] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:33.171 [2024-07-15 18:43:49.628127] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:33.171 [2024-07-15 18:43:49.628132] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:33.171 [2024-07-15 18:43:49.628138] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:33.171 [2024-07-15 18:43:49.628155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.738 18:43:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:33.738 18:43:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:33.738 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:33.738 18:43:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:33.738 18:43:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.738 18:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.738 18:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.rVR2BEBXil 00:18:33.738 18:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.rVR2BEBXil 00:18:33.738 18:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:33.997 [2024-07-15 18:43:50.484015] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.997 18:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:33.997 18:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 
00:18:34.255 [2024-07-15 18:43:50.820870] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:34.255 [2024-07-15 18:43:50.821074] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.255 18:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:34.514 malloc0 00:18:34.514 18:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:34.514 18:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rVR2BEBXil 00:18:34.773 [2024-07-15 18:43:51.326421] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:34.773 18:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rVR2BEBXil 00:18:34.773 18:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:34.773 18:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:34.773 18:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:34.773 18:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rVR2BEBXil' 00:18:34.773 18:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:34.773 18:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:34.773 18:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1120251 00:18:34.773 18:43:51 
nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:34.773 18:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1120251 /var/tmp/bdevperf.sock 00:18:34.773 18:43:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1120251 ']' 00:18:34.773 18:43:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:34.773 18:43:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:34.773 18:43:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:34.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:34.773 18:43:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:34.773 18:43:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.773 [2024-07-15 18:43:51.388270] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:18:34.773 [2024-07-15 18:43:51.388316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1120251 ] 00:18:34.773 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.773 [2024-07-15 18:43:51.439028] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.032 [2024-07-15 18:43:51.517738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:35.601 18:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:35.601 18:43:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:35.601 18:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rVR2BEBXil 00:18:35.860 [2024-07-15 18:43:52.325396] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:35.860 [2024-07-15 18:43:52.325498] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:35.860 TLSTESTn1 00:18:35.860 18:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:35.860 Running I/O for 10 seconds... 
00:18:45.841
00:18:45.841 Latency(us)
00:18:45.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:45.841 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:45.841 Verification LBA range: start 0x0 length 0x2000
00:18:45.841 TLSTESTn1 : 10.02 5678.57 22.18 0.00 0.00 22504.10 4587.52 30545.47
00:18:45.841 ===================================================================================================================
00:18:45.841 Total : 5678.57 22.18 0.00 0.00 22504.10 4587.52 30545.47
00:18:45.841 0
00:18:45.841 18:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:18:45.841 18:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1120251
00:18:45.841 18:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1120251 ']'
00:18:45.841 18:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1120251
00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1120251
00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1120251'
00:18:46.101 killing process with pid 1120251
00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1120251
00:18:46.101 Received shutdown signal, test time was about 10.000000 seconds
00:18:46.101
00:18:46.101 Latency(us)
00:18:46.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:46.101
=================================================================================================================== 00:18:46.101 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:46.101 [2024-07-15 18:44:02.594399] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1120251 00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.rVR2BEBXil 00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rVR2BEBXil 00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rVR2BEBXil 00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rVR2BEBXil 00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rVR2BEBXil' 00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1122104 00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1122104 /var/tmp/bdevperf.sock 00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1122104 ']' 00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:46.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:46.101 18:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.360 [2024-07-15 18:44:02.831653] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:18:46.360 [2024-07-15 18:44:02.831702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1122104 ]
00:18:46.360 EAL: No free 2048 kB hugepages reported on node 1
00:18:46.360 [2024-07-15 18:44:02.882775] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:46.360 [2024-07-15 18:44:02.959776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:18:46.927 18:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:46.927 18:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:18:46.927 18:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rVR2BEBXil
00:18:47.186 [2024-07-15 18:44:03.777463] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:47.186 [2024-07-15 18:44:03.777516] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file
00:18:47.186 [2024-07-15 18:44:03.777522] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.rVR2BEBXil
00:18:47.186 request:
00:18:47.186 {
00:18:47.186 "name": "TLSTEST",
00:18:47.186 "trtype": "tcp",
00:18:47.186 "traddr": "10.0.0.2",
00:18:47.186 "adrfam": "ipv4",
00:18:47.186 "trsvcid": "4420",
00:18:47.186 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:47.186 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:47.186 "prchk_reftag": false,
00:18:47.186 "prchk_guard": false,
00:18:47.186 "hdgst": false,
00:18:47.186 "ddgst": false,
00:18:47.186 "psk": "/tmp/tmp.rVR2BEBXil",
00:18:47.186 "method": "bdev_nvme_attach_controller",
00:18:47.186 "req_id": 1
00:18:47.186 }
00:18:47.186 Got JSON-RPC error response
00:18:47.186 response:
00:18:47.186 {
00:18:47.186 "code": -1,
00:18:47.186 "message": "Operation not permitted"
00:18:47.186 }
00:18:47.186 18:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1122104
00:18:47.186 18:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1122104 ']'
00:18:47.186 18:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1122104
00:18:47.186 18:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:18:47.186 18:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:47.186 18:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1122104
00:18:47.186 18:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:18:47.186 18:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:18:47.186 18:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1122104'
00:18:47.186 killing process with pid 1122104
00:18:47.186 18:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1122104
00:18:47.186 Received shutdown signal, test time was about 10.000000 seconds
00:18:47.186
00:18:47.186 Latency(us)
00:18:47.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:47.186 ===================================================================================================================
00:18:47.186 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:18:47.186 18:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1122104
00:18:47.445 18:44:04 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1
00:18:47.445 18:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1
00:18:47.445 18:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:18:47.445
18:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:47.445 18:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:47.445 18:44:04 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1119990 00:18:47.445 18:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1119990 ']' 00:18:47.445 18:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1119990 00:18:47.445 18:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:47.445 18:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:47.445 18:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1119990 00:18:47.445 18:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:47.445 18:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:47.445 18:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1119990' 00:18:47.445 killing process with pid 1119990 00:18:47.445 18:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1119990 00:18:47.445 [2024-07-15 18:44:04.060776] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:47.445 18:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1119990 00:18:47.704 18:44:04 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:47.704 18:44:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:47.704 18:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:47.704 18:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.704 18:44:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1122386 00:18:47.704 18:44:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:47.704 18:44:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1122386 00:18:47.704 18:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1122386 ']' 00:18:47.704 18:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.704 18:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:47.704 18:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.704 18:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:47.704 18:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.704 [2024-07-15 18:44:04.305459] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:18:47.704 [2024-07-15 18:44:04.305504] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.704 EAL: No free 2048 kB hugepages reported on node 1 00:18:47.704 [2024-07-15 18:44:04.362095] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.963 [2024-07-15 18:44:04.433109] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.963 [2024-07-15 18:44:04.433147] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:47.963 [2024-07-15 18:44:04.433154] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:47.963 [2024-07-15 18:44:04.433164] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:47.963 [2024-07-15 18:44:04.433169] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:47.963 [2024-07-15 18:44:04.433208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.532 18:44:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:48.532 18:44:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:48.532 18:44:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:48.532 18:44:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:48.532 18:44:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.532 18:44:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.532 18:44:05 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.rVR2BEBXil 00:18:48.532 18:44:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:48.532 18:44:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.rVR2BEBXil 00:18:48.532 18:44:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:18:48.532 18:44:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:48.532 18:44:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:18:48.532 18:44:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:48.532 18:44:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.rVR2BEBXil 00:18:48.532 18:44:05 nvmf_tcp.nvmf_tls -- 
target/tls.sh@49 -- # local key=/tmp/tmp.rVR2BEBXil 00:18:48.532 18:44:05 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:48.791 [2024-07-15 18:44:05.296133] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.791 18:44:05 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:48.791 18:44:05 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:49.050 [2024-07-15 18:44:05.637005] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:49.050 [2024-07-15 18:44:05.637176] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:49.050 18:44:05 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:49.310 malloc0 00:18:49.310 18:44:05 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:49.310 18:44:06 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rVR2BEBXil 00:18:49.570 [2024-07-15 18:44:06.150452] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:49.570 [2024-07-15 18:44:06.150476] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:49.570 [2024-07-15 18:44:06.150499] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:49.570 
request:
00:18:49.570 {
00:18:49.570 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:49.570 "host": "nqn.2016-06.io.spdk:host1",
00:18:49.570 "psk": "/tmp/tmp.rVR2BEBXil",
00:18:49.570 "method": "nvmf_subsystem_add_host",
00:18:49.570 "req_id": 1
00:18:49.570 }
00:18:49.570 Got JSON-RPC error response
00:18:49.570 response:
00:18:49.570 {
00:18:49.570 "code": -32603,
00:18:49.570 "message": "Internal error"
00:18:49.570 }
00:18:49.570 18:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1
00:18:49.570 18:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:18:49.570 18:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:18:49.570 18:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:18:49.570 18:44:06 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1122386
00:18:49.570 18:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1122386 ']'
00:18:49.570 18:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1122386
00:18:49.570 18:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:18:49.570 18:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:49.570 18:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1122386
00:18:49.570 18:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:18:49.570 18:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:18:49.570 18:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1122386'
00:18:49.570 killing process with pid 1122386
00:18:49.570 18:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1122386
00:18:49.570 18:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1122386
00:18:49.829 18:44:06 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.rVR2BEBXil
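The JSON-RPC failure above is the point of this negative test: the target refuses to load a PSK whose file is group- or world-accessible ("Incorrect permissions for PSK file"), which is why the run restores mode 0600 before continuing. A minimal sketch of preparing a key file the target will accept, reusing the key value generated earlier in this run (the temp path is freshly created here, not the run's own `/tmp/tmp.rVR2BEBXil`):

```shell
# Write the interchange-format PSK from this run to a fresh temp file,
# then restrict it to owner read/write. A 0666 file is rejected.
key_path=$(mktemp)
echo -n 'NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:' > "$key_path"
chmod 0600 "$key_path"
# The file can then be passed to the target, e.g.:
#   scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
#       nqn.2016-06.io.spdk:host1 --psk "$key_path"
stat -c '%a' "$key_path"   # prints 600
```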
00:18:49.829 18:44:06 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:49.829 18:44:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:49.829 18:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:49.829 18:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.829 18:44:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1122820 00:18:49.829 18:44:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:49.829 18:44:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1122820 00:18:49.829 18:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1122820 ']' 00:18:49.829 18:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.829 18:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:49.829 18:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.829 18:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:49.829 18:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.829 [2024-07-15 18:44:06.468900] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:18:49.829 [2024-07-15 18:44:06.468947] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.829 EAL: No free 2048 kB hugepages reported on node 1 00:18:49.829 [2024-07-15 18:44:06.523781] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.088 [2024-07-15 18:44:06.594259] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.088 [2024-07-15 18:44:06.594299] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.088 [2024-07-15 18:44:06.594306] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.088 [2024-07-15 18:44:06.594311] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.088 [2024-07-15 18:44:06.594316] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:50.088 [2024-07-15 18:44:06.594351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:18:50.657 18:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:50.657 18:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:18:50.657 18:44:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:18:50.657 18:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable
00:18:50.657 18:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:50.657 18:44:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:50.657 18:44:07 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.rVR2BEBXil
00:18:50.657 18:44:07 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.rVR2BEBXil
00:18:50.657 18:44:07 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:18:50.916 [2024-07-15 18:44:07.452033] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:50.916 18:44:07 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:18:51.175 18:44:07 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:18:51.175 [2024-07-15 18:44:07.788893] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:18:51.175 [2024-07-15 18:44:07.789067] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:51.175 18:44:07 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:18:51.446 malloc0
00:18:51.446 18:44:07 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:18:51.736 18:44:08 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rVR2BEBXil
00:18:51.736 [2024-07-15 18:44:08.302259] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
00:18:51.736 18:44:08 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1123090
00:18:51.736 18:44:08 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:18:51.736 18:44:08 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:51.736 18:44:08 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1123090 /var/tmp/bdevperf.sock
00:18:51.736 18:44:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1123090 ']'
00:18:51.736 18:44:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:51.736 18:44:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100
00:18:51.736 18:44:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:51.736 18:44:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable
00:18:51.736 18:44:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:51.736 [2024-07-15 18:44:08.361844] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization...
00:18:51.736 [2024-07-15 18:44:08.361891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1123090 ]
00:18:51.736 EAL: No free 2048 kB hugepages reported on node 1
00:18:51.736 [2024-07-15 18:44:08.413625] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:51.995 [2024-07-15 18:44:08.487892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:18:52.563 18:44:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:52.563 18:44:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:18:52.563 18:44:09 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rVR2BEBXil
00:18:52.822 [2024-07-15 18:44:09.317544] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:52.822 [2024-07-15 18:44:09.317615] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:18:52.822 TLSTESTn1
00:18:52.822 18:44:09 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config
00:18:53.082 18:44:09 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{
00:18:53.082 "subsystems": [
00:18:53.082 {
00:18:53.082
"subsystem": "keyring", 00:18:53.082 "config": [] 00:18:53.082 }, 00:18:53.082 { 00:18:53.082 "subsystem": "iobuf", 00:18:53.082 "config": [ 00:18:53.082 { 00:18:53.082 "method": "iobuf_set_options", 00:18:53.082 "params": { 00:18:53.082 "small_pool_count": 8192, 00:18:53.082 "large_pool_count": 1024, 00:18:53.082 "small_bufsize": 8192, 00:18:53.082 "large_bufsize": 135168 00:18:53.082 } 00:18:53.082 } 00:18:53.082 ] 00:18:53.082 }, 00:18:53.082 { 00:18:53.082 "subsystem": "sock", 00:18:53.082 "config": [ 00:18:53.082 { 00:18:53.082 "method": "sock_set_default_impl", 00:18:53.082 "params": { 00:18:53.082 "impl_name": "posix" 00:18:53.082 } 00:18:53.082 }, 00:18:53.082 { 00:18:53.082 "method": "sock_impl_set_options", 00:18:53.082 "params": { 00:18:53.082 "impl_name": "ssl", 00:18:53.082 "recv_buf_size": 4096, 00:18:53.082 "send_buf_size": 4096, 00:18:53.082 "enable_recv_pipe": true, 00:18:53.082 "enable_quickack": false, 00:18:53.082 "enable_placement_id": 0, 00:18:53.082 "enable_zerocopy_send_server": true, 00:18:53.082 "enable_zerocopy_send_client": false, 00:18:53.082 "zerocopy_threshold": 0, 00:18:53.082 "tls_version": 0, 00:18:53.082 "enable_ktls": false 00:18:53.082 } 00:18:53.082 }, 00:18:53.082 { 00:18:53.082 "method": "sock_impl_set_options", 00:18:53.082 "params": { 00:18:53.082 "impl_name": "posix", 00:18:53.082 "recv_buf_size": 2097152, 00:18:53.082 "send_buf_size": 2097152, 00:18:53.082 "enable_recv_pipe": true, 00:18:53.082 "enable_quickack": false, 00:18:53.083 "enable_placement_id": 0, 00:18:53.083 "enable_zerocopy_send_server": true, 00:18:53.083 "enable_zerocopy_send_client": false, 00:18:53.083 "zerocopy_threshold": 0, 00:18:53.083 "tls_version": 0, 00:18:53.083 "enable_ktls": false 00:18:53.083 } 00:18:53.083 } 00:18:53.083 ] 00:18:53.083 }, 00:18:53.083 { 00:18:53.083 "subsystem": "vmd", 00:18:53.083 "config": [] 00:18:53.083 }, 00:18:53.083 { 00:18:53.083 "subsystem": "accel", 00:18:53.083 "config": [ 00:18:53.083 { 00:18:53.083 "method": 
"accel_set_options", 00:18:53.083 "params": { 00:18:53.083 "small_cache_size": 128, 00:18:53.083 "large_cache_size": 16, 00:18:53.083 "task_count": 2048, 00:18:53.083 "sequence_count": 2048, 00:18:53.083 "buf_count": 2048 00:18:53.083 } 00:18:53.083 } 00:18:53.083 ] 00:18:53.083 }, 00:18:53.083 { 00:18:53.083 "subsystem": "bdev", 00:18:53.083 "config": [ 00:18:53.083 { 00:18:53.083 "method": "bdev_set_options", 00:18:53.083 "params": { 00:18:53.083 "bdev_io_pool_size": 65535, 00:18:53.083 "bdev_io_cache_size": 256, 00:18:53.083 "bdev_auto_examine": true, 00:18:53.083 "iobuf_small_cache_size": 128, 00:18:53.083 "iobuf_large_cache_size": 16 00:18:53.083 } 00:18:53.083 }, 00:18:53.083 { 00:18:53.083 "method": "bdev_raid_set_options", 00:18:53.083 "params": { 00:18:53.083 "process_window_size_kb": 1024 00:18:53.083 } 00:18:53.083 }, 00:18:53.083 { 00:18:53.083 "method": "bdev_iscsi_set_options", 00:18:53.083 "params": { 00:18:53.083 "timeout_sec": 30 00:18:53.083 } 00:18:53.083 }, 00:18:53.083 { 00:18:53.083 "method": "bdev_nvme_set_options", 00:18:53.083 "params": { 00:18:53.083 "action_on_timeout": "none", 00:18:53.083 "timeout_us": 0, 00:18:53.083 "timeout_admin_us": 0, 00:18:53.083 "keep_alive_timeout_ms": 10000, 00:18:53.083 "arbitration_burst": 0, 00:18:53.083 "low_priority_weight": 0, 00:18:53.083 "medium_priority_weight": 0, 00:18:53.083 "high_priority_weight": 0, 00:18:53.083 "nvme_adminq_poll_period_us": 10000, 00:18:53.083 "nvme_ioq_poll_period_us": 0, 00:18:53.083 "io_queue_requests": 0, 00:18:53.083 "delay_cmd_submit": true, 00:18:53.083 "transport_retry_count": 4, 00:18:53.083 "bdev_retry_count": 3, 00:18:53.083 "transport_ack_timeout": 0, 00:18:53.083 "ctrlr_loss_timeout_sec": 0, 00:18:53.083 "reconnect_delay_sec": 0, 00:18:53.083 "fast_io_fail_timeout_sec": 0, 00:18:53.083 "disable_auto_failback": false, 00:18:53.083 "generate_uuids": false, 00:18:53.083 "transport_tos": 0, 00:18:53.083 "nvme_error_stat": false, 00:18:53.083 "rdma_srq_size": 0, 
00:18:53.083 "io_path_stat": false, 00:18:53.083 "allow_accel_sequence": false, 00:18:53.083 "rdma_max_cq_size": 0, 00:18:53.083 "rdma_cm_event_timeout_ms": 0, 00:18:53.083 "dhchap_digests": [ 00:18:53.083 "sha256", 00:18:53.083 "sha384", 00:18:53.083 "sha512" 00:18:53.083 ], 00:18:53.083 "dhchap_dhgroups": [ 00:18:53.083 "null", 00:18:53.083 "ffdhe2048", 00:18:53.083 "ffdhe3072", 00:18:53.083 "ffdhe4096", 00:18:53.083 "ffdhe6144", 00:18:53.083 "ffdhe8192" 00:18:53.083 ] 00:18:53.083 } 00:18:53.083 }, 00:18:53.083 { 00:18:53.083 "method": "bdev_nvme_set_hotplug", 00:18:53.083 "params": { 00:18:53.083 "period_us": 100000, 00:18:53.083 "enable": false 00:18:53.083 } 00:18:53.083 }, 00:18:53.083 { 00:18:53.083 "method": "bdev_malloc_create", 00:18:53.083 "params": { 00:18:53.083 "name": "malloc0", 00:18:53.083 "num_blocks": 8192, 00:18:53.083 "block_size": 4096, 00:18:53.083 "physical_block_size": 4096, 00:18:53.083 "uuid": "1e87fbc4-e058-4e98-afc7-1b9565527bd3", 00:18:53.083 "optimal_io_boundary": 0 00:18:53.083 } 00:18:53.083 }, 00:18:53.083 { 00:18:53.083 "method": "bdev_wait_for_examine" 00:18:53.083 } 00:18:53.083 ] 00:18:53.083 }, 00:18:53.083 { 00:18:53.083 "subsystem": "nbd", 00:18:53.083 "config": [] 00:18:53.083 }, 00:18:53.083 { 00:18:53.083 "subsystem": "scheduler", 00:18:53.083 "config": [ 00:18:53.083 { 00:18:53.083 "method": "framework_set_scheduler", 00:18:53.083 "params": { 00:18:53.083 "name": "static" 00:18:53.083 } 00:18:53.083 } 00:18:53.083 ] 00:18:53.083 }, 00:18:53.083 { 00:18:53.083 "subsystem": "nvmf", 00:18:53.083 "config": [ 00:18:53.083 { 00:18:53.083 "method": "nvmf_set_config", 00:18:53.083 "params": { 00:18:53.083 "discovery_filter": "match_any", 00:18:53.083 "admin_cmd_passthru": { 00:18:53.083 "identify_ctrlr": false 00:18:53.083 } 00:18:53.083 } 00:18:53.083 }, 00:18:53.083 { 00:18:53.083 "method": "nvmf_set_max_subsystems", 00:18:53.083 "params": { 00:18:53.083 "max_subsystems": 1024 00:18:53.083 } 00:18:53.083 }, 00:18:53.083 { 
00:18:53.083 "method": "nvmf_set_crdt", 00:18:53.083 "params": { 00:18:53.083 "crdt1": 0, 00:18:53.083 "crdt2": 0, 00:18:53.083 "crdt3": 0 00:18:53.083 } 00:18:53.083 }, 00:18:53.083 { 00:18:53.083 "method": "nvmf_create_transport", 00:18:53.083 "params": { 00:18:53.083 "trtype": "TCP", 00:18:53.083 "max_queue_depth": 128, 00:18:53.083 "max_io_qpairs_per_ctrlr": 127, 00:18:53.083 "in_capsule_data_size": 4096, 00:18:53.083 "max_io_size": 131072, 00:18:53.083 "io_unit_size": 131072, 00:18:53.083 "max_aq_depth": 128, 00:18:53.083 "num_shared_buffers": 511, 00:18:53.083 "buf_cache_size": 4294967295, 00:18:53.083 "dif_insert_or_strip": false, 00:18:53.083 "zcopy": false, 00:18:53.083 "c2h_success": false, 00:18:53.083 "sock_priority": 0, 00:18:53.083 "abort_timeout_sec": 1, 00:18:53.083 "ack_timeout": 0, 00:18:53.083 "data_wr_pool_size": 0 00:18:53.083 } 00:18:53.083 }, 00:18:53.083 { 00:18:53.083 "method": "nvmf_create_subsystem", 00:18:53.083 "params": { 00:18:53.083 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.083 "allow_any_host": false, 00:18:53.083 "serial_number": "SPDK00000000000001", 00:18:53.083 "model_number": "SPDK bdev Controller", 00:18:53.083 "max_namespaces": 10, 00:18:53.083 "min_cntlid": 1, 00:18:53.083 "max_cntlid": 65519, 00:18:53.083 "ana_reporting": false 00:18:53.083 } 00:18:53.083 }, 00:18:53.083 { 00:18:53.083 "method": "nvmf_subsystem_add_host", 00:18:53.083 "params": { 00:18:53.083 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.083 "host": "nqn.2016-06.io.spdk:host1", 00:18:53.083 "psk": "/tmp/tmp.rVR2BEBXil" 00:18:53.083 } 00:18:53.083 }, 00:18:53.083 { 00:18:53.083 "method": "nvmf_subsystem_add_ns", 00:18:53.083 "params": { 00:18:53.083 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.083 "namespace": { 00:18:53.083 "nsid": 1, 00:18:53.083 "bdev_name": "malloc0", 00:18:53.083 "nguid": "1E87FBC4E0584E98AFC71B9565527BD3", 00:18:53.083 "uuid": "1e87fbc4-e058-4e98-afc7-1b9565527bd3", 00:18:53.083 "no_auto_visible": false 00:18:53.083 } 00:18:53.083 
} 00:18:53.083 }, 00:18:53.083 { 00:18:53.083 "method": "nvmf_subsystem_add_listener", 00:18:53.083 "params": { 00:18:53.083 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.083 "listen_address": { 00:18:53.083 "trtype": "TCP", 00:18:53.083 "adrfam": "IPv4", 00:18:53.083 "traddr": "10.0.0.2", 00:18:53.083 "trsvcid": "4420" 00:18:53.083 }, 00:18:53.083 "secure_channel": true 00:18:53.083 } 00:18:53.083 } 00:18:53.083 ] 00:18:53.083 } 00:18:53.083 ] 00:18:53.083 }' 00:18:53.083 18:44:09 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:53.344 18:44:09 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:53.344 "subsystems": [ 00:18:53.344 { 00:18:53.344 "subsystem": "keyring", 00:18:53.344 "config": [] 00:18:53.344 }, 00:18:53.344 { 00:18:53.344 "subsystem": "iobuf", 00:18:53.344 "config": [ 00:18:53.344 { 00:18:53.344 "method": "iobuf_set_options", 00:18:53.344 "params": { 00:18:53.344 "small_pool_count": 8192, 00:18:53.344 "large_pool_count": 1024, 00:18:53.344 "small_bufsize": 8192, 00:18:53.344 "large_bufsize": 135168 00:18:53.344 } 00:18:53.344 } 00:18:53.344 ] 00:18:53.344 }, 00:18:53.344 { 00:18:53.344 "subsystem": "sock", 00:18:53.344 "config": [ 00:18:53.344 { 00:18:53.344 "method": "sock_set_default_impl", 00:18:53.344 "params": { 00:18:53.344 "impl_name": "posix" 00:18:53.344 } 00:18:53.344 }, 00:18:53.344 { 00:18:53.344 "method": "sock_impl_set_options", 00:18:53.344 "params": { 00:18:53.344 "impl_name": "ssl", 00:18:53.344 "recv_buf_size": 4096, 00:18:53.344 "send_buf_size": 4096, 00:18:53.344 "enable_recv_pipe": true, 00:18:53.344 "enable_quickack": false, 00:18:53.344 "enable_placement_id": 0, 00:18:53.344 "enable_zerocopy_send_server": true, 00:18:53.344 "enable_zerocopy_send_client": false, 00:18:53.344 "zerocopy_threshold": 0, 00:18:53.344 "tls_version": 0, 00:18:53.344 "enable_ktls": false 00:18:53.344 } 00:18:53.344 }, 00:18:53.344 { 
00:18:53.344 "method": "sock_impl_set_options", 00:18:53.344 "params": { 00:18:53.344 "impl_name": "posix", 00:18:53.344 "recv_buf_size": 2097152, 00:18:53.344 "send_buf_size": 2097152, 00:18:53.344 "enable_recv_pipe": true, 00:18:53.344 "enable_quickack": false, 00:18:53.344 "enable_placement_id": 0, 00:18:53.344 "enable_zerocopy_send_server": true, 00:18:53.344 "enable_zerocopy_send_client": false, 00:18:53.344 "zerocopy_threshold": 0, 00:18:53.344 "tls_version": 0, 00:18:53.344 "enable_ktls": false 00:18:53.344 } 00:18:53.344 } 00:18:53.344 ] 00:18:53.344 }, 00:18:53.344 { 00:18:53.344 "subsystem": "vmd", 00:18:53.344 "config": [] 00:18:53.344 }, 00:18:53.344 { 00:18:53.344 "subsystem": "accel", 00:18:53.344 "config": [ 00:18:53.344 { 00:18:53.344 "method": "accel_set_options", 00:18:53.344 "params": { 00:18:53.344 "small_cache_size": 128, 00:18:53.344 "large_cache_size": 16, 00:18:53.344 "task_count": 2048, 00:18:53.344 "sequence_count": 2048, 00:18:53.344 "buf_count": 2048 00:18:53.344 } 00:18:53.344 } 00:18:53.344 ] 00:18:53.344 }, 00:18:53.344 { 00:18:53.344 "subsystem": "bdev", 00:18:53.344 "config": [ 00:18:53.344 { 00:18:53.344 "method": "bdev_set_options", 00:18:53.344 "params": { 00:18:53.344 "bdev_io_pool_size": 65535, 00:18:53.344 "bdev_io_cache_size": 256, 00:18:53.344 "bdev_auto_examine": true, 00:18:53.344 "iobuf_small_cache_size": 128, 00:18:53.344 "iobuf_large_cache_size": 16 00:18:53.344 } 00:18:53.344 }, 00:18:53.344 { 00:18:53.344 "method": "bdev_raid_set_options", 00:18:53.344 "params": { 00:18:53.344 "process_window_size_kb": 1024 00:18:53.344 } 00:18:53.344 }, 00:18:53.344 { 00:18:53.344 "method": "bdev_iscsi_set_options", 00:18:53.344 "params": { 00:18:53.344 "timeout_sec": 30 00:18:53.344 } 00:18:53.344 }, 00:18:53.344 { 00:18:53.344 "method": "bdev_nvme_set_options", 00:18:53.344 "params": { 00:18:53.344 "action_on_timeout": "none", 00:18:53.344 "timeout_us": 0, 00:18:53.344 "timeout_admin_us": 0, 00:18:53.344 "keep_alive_timeout_ms": 
10000, 00:18:53.344 "arbitration_burst": 0, 00:18:53.344 "low_priority_weight": 0, 00:18:53.344 "medium_priority_weight": 0, 00:18:53.344 "high_priority_weight": 0, 00:18:53.344 "nvme_adminq_poll_period_us": 10000, 00:18:53.344 "nvme_ioq_poll_period_us": 0, 00:18:53.344 "io_queue_requests": 512, 00:18:53.344 "delay_cmd_submit": true, 00:18:53.344 "transport_retry_count": 4, 00:18:53.344 "bdev_retry_count": 3, 00:18:53.344 "transport_ack_timeout": 0, 00:18:53.344 "ctrlr_loss_timeout_sec": 0, 00:18:53.344 "reconnect_delay_sec": 0, 00:18:53.344 "fast_io_fail_timeout_sec": 0, 00:18:53.344 "disable_auto_failback": false, 00:18:53.344 "generate_uuids": false, 00:18:53.344 "transport_tos": 0, 00:18:53.344 "nvme_error_stat": false, 00:18:53.344 "rdma_srq_size": 0, 00:18:53.344 "io_path_stat": false, 00:18:53.344 "allow_accel_sequence": false, 00:18:53.344 "rdma_max_cq_size": 0, 00:18:53.344 "rdma_cm_event_timeout_ms": 0, 00:18:53.344 "dhchap_digests": [ 00:18:53.344 "sha256", 00:18:53.345 "sha384", 00:18:53.345 "sha512" 00:18:53.345 ], 00:18:53.345 "dhchap_dhgroups": [ 00:18:53.345 "null", 00:18:53.345 "ffdhe2048", 00:18:53.345 "ffdhe3072", 00:18:53.345 "ffdhe4096", 00:18:53.345 "ffdhe6144", 00:18:53.345 "ffdhe8192" 00:18:53.345 ] 00:18:53.345 } 00:18:53.345 }, 00:18:53.345 { 00:18:53.345 "method": "bdev_nvme_attach_controller", 00:18:53.345 "params": { 00:18:53.345 "name": "TLSTEST", 00:18:53.345 "trtype": "TCP", 00:18:53.345 "adrfam": "IPv4", 00:18:53.345 "traddr": "10.0.0.2", 00:18:53.345 "trsvcid": "4420", 00:18:53.345 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.345 "prchk_reftag": false, 00:18:53.345 "prchk_guard": false, 00:18:53.345 "ctrlr_loss_timeout_sec": 0, 00:18:53.345 "reconnect_delay_sec": 0, 00:18:53.345 "fast_io_fail_timeout_sec": 0, 00:18:53.345 "psk": "/tmp/tmp.rVR2BEBXil", 00:18:53.345 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:53.345 "hdgst": false, 00:18:53.345 "ddgst": false 00:18:53.345 } 00:18:53.345 }, 00:18:53.345 { 00:18:53.345 "method": 
"bdev_nvme_set_hotplug", 00:18:53.345 "params": { 00:18:53.345 "period_us": 100000, 00:18:53.345 "enable": false 00:18:53.345 } 00:18:53.345 }, 00:18:53.345 { 00:18:53.345 "method": "bdev_wait_for_examine" 00:18:53.345 } 00:18:53.345 ] 00:18:53.345 }, 00:18:53.345 { 00:18:53.345 "subsystem": "nbd", 00:18:53.345 "config": [] 00:18:53.345 } 00:18:53.345 ] 00:18:53.345 }' 00:18:53.345 18:44:09 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1123090 00:18:53.345 18:44:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1123090 ']' 00:18:53.345 18:44:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1123090 00:18:53.345 18:44:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:53.345 18:44:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:53.345 18:44:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1123090 00:18:53.345 18:44:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:53.345 18:44:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:53.345 18:44:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1123090' 00:18:53.345 killing process with pid 1123090 00:18:53.345 18:44:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1123090 00:18:53.345 Received shutdown signal, test time was about 10.000000 seconds 00:18:53.345 00:18:53.345 Latency(us) 00:18:53.345 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.345 =================================================================================================================== 00:18:53.345 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:53.345 [2024-07-15 18:44:09.945419] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 
00:18:53.345 18:44:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1123090
00:18:53.604 18:44:10 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1122820
00:18:53.604 18:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1122820 ']'
00:18:53.604 18:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1122820
00:18:53.604 18:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:18:53.604 18:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:53.604 18:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1122820
00:18:53.604 18:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:18:53.604 18:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:18:53.604 18:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1122820'
killing process with pid 1122820
00:18:53.604 18:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1122820
00:18:53.604 [2024-07-15 18:44:10.176231] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:18:53.604 18:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1122820
00:18:53.864 18:44:10 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62
00:18:53.864 18:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:18:53.864 18:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable
00:18:53.864 18:44:10 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{
00:18:53.864 "subsystems": [
00:18:53.864 {
00:18:53.864 "subsystem": "keyring",
00:18:53.864 "config": []
00:18:53.864 },
00:18:53.864 {
00:18:53.864 "subsystem": "iobuf",
00:18:53.864 "config": [
00:18:53.864 {
00:18:53.864 "method":
"iobuf_set_options", 00:18:53.864 "params": { 00:18:53.864 "small_pool_count": 8192, 00:18:53.864 "large_pool_count": 1024, 00:18:53.864 "small_bufsize": 8192, 00:18:53.864 "large_bufsize": 135168 00:18:53.864 } 00:18:53.864 } 00:18:53.864 ] 00:18:53.864 }, 00:18:53.864 { 00:18:53.864 "subsystem": "sock", 00:18:53.864 "config": [ 00:18:53.864 { 00:18:53.864 "method": "sock_set_default_impl", 00:18:53.864 "params": { 00:18:53.865 "impl_name": "posix" 00:18:53.865 } 00:18:53.865 }, 00:18:53.865 { 00:18:53.865 "method": "sock_impl_set_options", 00:18:53.865 "params": { 00:18:53.865 "impl_name": "ssl", 00:18:53.865 "recv_buf_size": 4096, 00:18:53.865 "send_buf_size": 4096, 00:18:53.865 "enable_recv_pipe": true, 00:18:53.865 "enable_quickack": false, 00:18:53.865 "enable_placement_id": 0, 00:18:53.865 "enable_zerocopy_send_server": true, 00:18:53.865 "enable_zerocopy_send_client": false, 00:18:53.865 "zerocopy_threshold": 0, 00:18:53.865 "tls_version": 0, 00:18:53.865 "enable_ktls": false 00:18:53.865 } 00:18:53.865 }, 00:18:53.865 { 00:18:53.865 "method": "sock_impl_set_options", 00:18:53.865 "params": { 00:18:53.865 "impl_name": "posix", 00:18:53.865 "recv_buf_size": 2097152, 00:18:53.865 "send_buf_size": 2097152, 00:18:53.865 "enable_recv_pipe": true, 00:18:53.865 "enable_quickack": false, 00:18:53.865 "enable_placement_id": 0, 00:18:53.865 "enable_zerocopy_send_server": true, 00:18:53.865 "enable_zerocopy_send_client": false, 00:18:53.865 "zerocopy_threshold": 0, 00:18:53.865 "tls_version": 0, 00:18:53.865 "enable_ktls": false 00:18:53.865 } 00:18:53.865 } 00:18:53.865 ] 00:18:53.865 }, 00:18:53.865 { 00:18:53.865 "subsystem": "vmd", 00:18:53.865 "config": [] 00:18:53.865 }, 00:18:53.865 { 00:18:53.865 "subsystem": "accel", 00:18:53.865 "config": [ 00:18:53.865 { 00:18:53.865 "method": "accel_set_options", 00:18:53.865 "params": { 00:18:53.865 "small_cache_size": 128, 00:18:53.865 "large_cache_size": 16, 00:18:53.865 "task_count": 2048, 00:18:53.865 
"sequence_count": 2048, 00:18:53.865 "buf_count": 2048 00:18:53.865 } 00:18:53.865 } 00:18:53.865 ] 00:18:53.865 }, 00:18:53.865 { 00:18:53.865 "subsystem": "bdev", 00:18:53.865 "config": [ 00:18:53.865 { 00:18:53.865 "method": "bdev_set_options", 00:18:53.865 "params": { 00:18:53.865 "bdev_io_pool_size": 65535, 00:18:53.865 "bdev_io_cache_size": 256, 00:18:53.865 "bdev_auto_examine": true, 00:18:53.865 "iobuf_small_cache_size": 128, 00:18:53.865 "iobuf_large_cache_size": 16 00:18:53.865 } 00:18:53.865 }, 00:18:53.865 { 00:18:53.865 "method": "bdev_raid_set_options", 00:18:53.865 "params": { 00:18:53.865 "process_window_size_kb": 1024 00:18:53.865 } 00:18:53.865 }, 00:18:53.865 { 00:18:53.865 "method": "bdev_iscsi_set_options", 00:18:53.865 "params": { 00:18:53.865 "timeout_sec": 30 00:18:53.865 } 00:18:53.865 }, 00:18:53.865 { 00:18:53.865 "method": "bdev_nvme_set_options", 00:18:53.865 "params": { 00:18:53.865 "action_on_timeout": "none", 00:18:53.865 "timeout_us": 0, 00:18:53.865 "timeout_admin_us": 0, 00:18:53.865 "keep_alive_timeout_ms": 10000, 00:18:53.865 "arbitration_burst": 0, 00:18:53.865 "low_priority_weight": 0, 00:18:53.865 "medium_priority_weight": 0, 00:18:53.865 "high_priority_weight": 0, 00:18:53.865 "nvme_adminq_poll_period_us": 10000, 00:18:53.865 "nvme_ioq_poll_period_us": 0, 00:18:53.865 "io_queue_requests": 0, 00:18:53.865 "delay_cmd_submit": true, 00:18:53.865 "transport_retry_count": 4, 00:18:53.865 "bdev_retry_count": 3, 00:18:53.865 "transport_ack_timeout": 0, 00:18:53.865 "ctrlr_loss_timeout_sec": 0, 00:18:53.865 "reconnect_delay_sec": 0, 00:18:53.865 "fast_io_fail_timeout_sec": 0, 00:18:53.865 "disable_auto_failback": false, 00:18:53.865 "generate_uuids": false, 00:18:53.865 "transport_tos": 0, 00:18:53.865 "nvme_error_stat": false, 00:18:53.865 "rdma_srq_size": 0, 00:18:53.865 "io_path_stat": false, 00:18:53.865 "allow_accel_sequence": false, 00:18:53.865 "rdma_max_cq_size": 0, 00:18:53.865 "rdma_cm_event_timeout_ms": 0, 00:18:53.865 
"dhchap_digests": [ 00:18:53.865 "sha256", 00:18:53.865 "sha384", 00:18:53.865 "sha512" 00:18:53.865 ], 00:18:53.865 "dhchap_dhgroups": [ 00:18:53.865 "null", 00:18:53.865 "ffdhe2048", 00:18:53.865 "ffdhe3072", 00:18:53.865 "ffdhe4096", 00:18:53.865 "ffdhe6144", 00:18:53.865 "ffdhe8192" 00:18:53.865 ] 00:18:53.865 } 00:18:53.865 }, 00:18:53.865 { 00:18:53.865 "method": "bdev_nvme_set_hotplug", 00:18:53.865 "params": { 00:18:53.865 "period_us": 100000, 00:18:53.865 "enable": false 00:18:53.865 } 00:18:53.865 }, 00:18:53.865 { 00:18:53.865 "method": "bdev_malloc_create", 00:18:53.865 "params": { 00:18:53.865 "name": "malloc0", 00:18:53.865 "num_blocks": 8192, 00:18:53.865 "block_size": 4096, 00:18:53.865 "physical_block_size": 4096, 00:18:53.865 "uuid": "1e87fbc4-e058-4e98-afc7-1b9565527bd3", 00:18:53.865 "optimal_io_boundary": 0 00:18:53.865 } 00:18:53.865 }, 00:18:53.865 { 00:18:53.865 "method": "bdev_wait_for_examine" 00:18:53.865 } 00:18:53.865 ] 00:18:53.865 }, 00:18:53.865 { 00:18:53.865 "subsystem": "nbd", 00:18:53.865 "config": [] 00:18:53.865 }, 00:18:53.865 { 00:18:53.865 "subsystem": "scheduler", 00:18:53.865 "config": [ 00:18:53.865 { 00:18:53.865 "method": "framework_set_scheduler", 00:18:53.865 "params": { 00:18:53.865 "name": "static" 00:18:53.865 } 00:18:53.865 } 00:18:53.865 ] 00:18:53.865 }, 00:18:53.865 { 00:18:53.865 "subsystem": "nvmf", 00:18:53.865 "config": [ 00:18:53.865 { 00:18:53.865 "method": "nvmf_set_config", 00:18:53.865 "params": { 00:18:53.865 "discovery_filter": "match_any", 00:18:53.865 "admin_cmd_passthru": { 00:18:53.865 "identify_ctrlr": false 00:18:53.865 } 00:18:53.865 } 00:18:53.865 }, 00:18:53.865 { 00:18:53.865 "method": "nvmf_set_max_subsystems", 00:18:53.865 "params": { 00:18:53.865 "max_subsystems": 1024 00:18:53.865 } 00:18:53.865 }, 00:18:53.865 { 00:18:53.865 "method": "nvmf_set_crdt", 00:18:53.865 "params": { 00:18:53.865 "crdt1": 0, 00:18:53.865 "crdt2": 0, 00:18:53.865 "crdt3": 0 00:18:53.865 } 00:18:53.865 }, 
00:18:53.865 { 00:18:53.866 "method": "nvmf_create_transport", 00:18:53.866 "params": { 00:18:53.866 "trtype": "TCP", 00:18:53.866 "max_queue_depth": 128, 00:18:53.866 "max_io_qpairs_per_ctrlr": 127, 00:18:53.866 "in_capsule_data_size": 4096, 00:18:53.866 "max_io_size": 131072, 00:18:53.866 "io_unit_size": 131072, 00:18:53.866 "max_aq_depth": 128, 00:18:53.866 "num_shared_buffers": 511, 00:18:53.866 "buf_cache_size": 4294967295, 00:18:53.866 "dif_insert_or_strip": false, 00:18:53.866 "zcopy": false, 00:18:53.866 "c2h_success": false, 00:18:53.866 "sock_priority": 0, 00:18:53.866 "abort_timeout_sec": 1, 00:18:53.866 "ack_timeout": 0, 00:18:53.866 "data_wr_pool_size": 0 00:18:53.866 } 00:18:53.866 }, 00:18:53.866 { 00:18:53.866 "method": "nvmf_create_subsystem", 00:18:53.866 "params": { 00:18:53.866 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.866 "allow_any_host": false, 00:18:53.866 "serial_number": "SPDK00000000000001", 00:18:53.866 "model_number": "SPDK bdev Controller", 00:18:53.866 "max_namespaces": 10, 00:18:53.866 "min_cntlid": 1, 00:18:53.866 "max_cntlid": 65519, 00:18:53.866 "ana_reporting": false 00:18:53.866 } 00:18:53.866 }, 00:18:53.866 { 00:18:53.866 "method": "nvmf_subsystem_add_host", 00:18:53.866 "params": { 00:18:53.866 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.866 "host": "nqn.2016-06.io.spdk:host1", 00:18:53.866 "psk": "/tmp/tmp.rVR2BEBXil" 00:18:53.866 } 00:18:53.866 }, 00:18:53.866 { 00:18:53.866 "method": "nvmf_subsystem_add_ns", 00:18:53.866 "params": { 00:18:53.866 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.866 "namespace": { 00:18:53.866 "nsid": 1, 00:18:53.866 "bdev_name": "malloc0", 00:18:53.866 "nguid": "1E87FBC4E0584E98AFC71B9565527BD3", 00:18:53.866 "uuid": "1e87fbc4-e058-4e98-afc7-1b9565527bd3", 00:18:53.866 "no_auto_visible": false 00:18:53.866 } 00:18:53.866 } 00:18:53.866 }, 00:18:53.866 { 00:18:53.866 "method": "nvmf_subsystem_add_listener", 00:18:53.866 "params": { 00:18:53.866 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:18:53.866 "listen_address": { 00:18:53.866 "trtype": "TCP", 00:18:53.866 "adrfam": "IPv4", 00:18:53.866 "traddr": "10.0.0.2", 00:18:53.866 "trsvcid": "4420" 00:18:53.866 }, 00:18:53.866 "secure_channel": true 00:18:53.866 } 00:18:53.866 } 00:18:53.866 ] 00:18:53.866 } 00:18:53.866 ] 00:18:53.866 }' 00:18:53.866 18:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.866 18:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1123554 00:18:53.866 18:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:53.866 18:44:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1123554 00:18:53.866 18:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1123554 ']' 00:18:53.866 18:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.866 18:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:53.866 18:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.866 18:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:53.866 18:44:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.866 [2024-07-15 18:44:10.425955] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:18:53.866 [2024-07-15 18:44:10.426001] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:53.866 EAL: No free 2048 kB hugepages reported on node 1
00:18:53.866 [2024-07-15 18:44:10.482640] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:53.866 [2024-07-15 18:44:10.561462] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:53.866 [2024-07-15 18:44:10.561496] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:53.866 [2024-07-15 18:44:10.561504] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:53.866 [2024-07-15 18:44:10.561510] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:53.866 [2024-07-15 18:44:10.561515] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:53.866 [2024-07-15 18:44:10.561564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.125 [2024-07-15 18:44:10.762870] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:54.125 [2024-07-15 18:44:10.778848] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:54.125 [2024-07-15 18:44:10.794896] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:54.125 [2024-07-15 18:44:10.806537] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:54.694 18:44:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:54.694 18:44:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:54.694 18:44:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:54.694 18:44:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:54.694 18:44:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.694 18:44:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.694 18:44:11 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1123646 00:18:54.694 18:44:11 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1123646 /var/tmp/bdevperf.sock 00:18:54.694 18:44:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1123646 ']' 00:18:54.694 18:44:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:54.694 18:44:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:54.694 18:44:11 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:54.695 18:44:11 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:54.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:54.695 18:44:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:54.695 18:44:11 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:18:54.695 "subsystems": [ 00:18:54.695 { 00:18:54.695 "subsystem": "keyring", 00:18:54.695 "config": [] 00:18:54.695 }, 00:18:54.695 { 00:18:54.695 "subsystem": "iobuf", 00:18:54.695 "config": [ 00:18:54.695 { 00:18:54.695 "method": "iobuf_set_options", 00:18:54.695 "params": { 00:18:54.695 "small_pool_count": 8192, 00:18:54.695 "large_pool_count": 1024, 00:18:54.695 "small_bufsize": 8192, 00:18:54.695 "large_bufsize": 135168 00:18:54.695 } 00:18:54.695 } 00:18:54.695 ] 00:18:54.695 }, 00:18:54.695 { 00:18:54.695 "subsystem": "sock", 00:18:54.695 "config": [ 00:18:54.695 { 00:18:54.695 "method": "sock_set_default_impl", 00:18:54.695 "params": { 00:18:54.695 "impl_name": "posix" 00:18:54.695 } 00:18:54.695 }, 00:18:54.695 { 00:18:54.695 "method": "sock_impl_set_options", 00:18:54.695 "params": { 00:18:54.695 "impl_name": "ssl", 00:18:54.695 "recv_buf_size": 4096, 00:18:54.695 "send_buf_size": 4096, 00:18:54.695 "enable_recv_pipe": true, 00:18:54.695 "enable_quickack": false, 00:18:54.695 "enable_placement_id": 0, 00:18:54.695 "enable_zerocopy_send_server": true, 00:18:54.695 "enable_zerocopy_send_client": false, 00:18:54.695 "zerocopy_threshold": 0, 00:18:54.695 "tls_version": 0, 00:18:54.695 "enable_ktls": false 00:18:54.695 } 00:18:54.695 }, 00:18:54.695 { 00:18:54.695 "method": "sock_impl_set_options", 00:18:54.695 "params": { 00:18:54.695 "impl_name": "posix", 00:18:54.695 "recv_buf_size": 2097152, 00:18:54.695 "send_buf_size": 2097152, 00:18:54.695 "enable_recv_pipe": true, 00:18:54.695 "enable_quickack": false, 00:18:54.695 "enable_placement_id": 0, 
00:18:54.695 "enable_zerocopy_send_server": true, 00:18:54.695 "enable_zerocopy_send_client": false, 00:18:54.695 "zerocopy_threshold": 0, 00:18:54.695 "tls_version": 0, 00:18:54.695 "enable_ktls": false 00:18:54.695 } 00:18:54.695 } 00:18:54.695 ] 00:18:54.695 }, 00:18:54.695 { 00:18:54.695 "subsystem": "vmd", 00:18:54.695 "config": [] 00:18:54.695 }, 00:18:54.695 { 00:18:54.695 "subsystem": "accel", 00:18:54.695 "config": [ 00:18:54.695 { 00:18:54.695 "method": "accel_set_options", 00:18:54.695 "params": { 00:18:54.695 "small_cache_size": 128, 00:18:54.695 "large_cache_size": 16, 00:18:54.695 "task_count": 2048, 00:18:54.695 "sequence_count": 2048, 00:18:54.695 "buf_count": 2048 00:18:54.695 } 00:18:54.695 } 00:18:54.695 ] 00:18:54.695 }, 00:18:54.695 { 00:18:54.695 "subsystem": "bdev", 00:18:54.695 "config": [ 00:18:54.695 { 00:18:54.695 "method": "bdev_set_options", 00:18:54.695 "params": { 00:18:54.695 "bdev_io_pool_size": 65535, 00:18:54.695 "bdev_io_cache_size": 256, 00:18:54.695 "bdev_auto_examine": true, 00:18:54.695 "iobuf_small_cache_size": 128, 00:18:54.695 "iobuf_large_cache_size": 16 00:18:54.695 } 00:18:54.695 }, 00:18:54.695 { 00:18:54.695 "method": "bdev_raid_set_options", 00:18:54.695 "params": { 00:18:54.695 "process_window_size_kb": 1024 00:18:54.695 } 00:18:54.695 }, 00:18:54.695 { 00:18:54.695 "method": "bdev_iscsi_set_options", 00:18:54.695 "params": { 00:18:54.695 "timeout_sec": 30 00:18:54.695 } 00:18:54.695 }, 00:18:54.695 { 00:18:54.695 "method": "bdev_nvme_set_options", 00:18:54.695 "params": { 00:18:54.695 "action_on_timeout": "none", 00:18:54.695 "timeout_us": 0, 00:18:54.695 "timeout_admin_us": 0, 00:18:54.695 "keep_alive_timeout_ms": 10000, 00:18:54.695 "arbitration_burst": 0, 00:18:54.695 "low_priority_weight": 0, 00:18:54.695 "medium_priority_weight": 0, 00:18:54.695 "high_priority_weight": 0, 00:18:54.695 "nvme_adminq_poll_period_us": 10000, 00:18:54.695 "nvme_ioq_poll_period_us": 0, 00:18:54.695 "io_queue_requests": 512, 
00:18:54.695 "delay_cmd_submit": true, 00:18:54.695 "transport_retry_count": 4, 00:18:54.695 "bdev_retry_count": 3, 00:18:54.695 "transport_ack_timeout": 0, 00:18:54.695 "ctrlr_loss_timeout_sec": 0, 00:18:54.695 "reconnect_delay_sec": 0, 00:18:54.695 "fast_io_fail_timeout_sec": 0, 00:18:54.695 "disable_auto_failback": false, 00:18:54.695 "generate_uuids": false, 00:18:54.695 "transport_tos": 0, 00:18:54.695 "nvme_error_stat": false, 00:18:54.695 "rdma_srq_size": 0, 00:18:54.695 "io_path_stat": false, 00:18:54.695 "allow_accel_sequence": false, 00:18:54.695 "rdma_max_cq_size": 0, 00:18:54.695 "rdma_cm_event_timeout_ms": 0, 00:18:54.695 "dhchap_digests": [ 00:18:54.695 "sha256", 00:18:54.695 "sha384", 00:18:54.695 "sha512" 00:18:54.695 ], 00:18:54.695 "dhchap_dhgroups": [ 00:18:54.695 "null", 00:18:54.695 "ffdhe2048", 00:18:54.695 "ffdhe3072", 00:18:54.695 "ffdhe4096", 00:18:54.695 "ffdhe6144", 00:18:54.695 "ffdhe8192" 00:18:54.695 ] 00:18:54.695 } 00:18:54.695 }, 00:18:54.695 { 00:18:54.695 "method": "bdev_nvme_attach_controller", 00:18:54.695 "params": { 00:18:54.695 "name": "TLSTEST", 00:18:54.695 "trtype": "TCP", 00:18:54.695 "adrfam": "IPv4", 00:18:54.695 "traddr": "10.0.0.2", 00:18:54.695 "trsvcid": "4420", 00:18:54.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.695 "prchk_reftag": false, 00:18:54.695 "prchk_guard": false, 00:18:54.695 "ctrlr_loss_timeout_sec": 0, 00:18:54.695 "reconnect_delay_sec": 0, 00:18:54.695 "fast_io_fail_timeout_sec": 0, 00:18:54.695 "psk": "/tmp/tmp.rVR2BEBXil", 00:18:54.695 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:54.695 "hdgst": false, 00:18:54.695 "ddgst": false 00:18:54.695 } 00:18:54.695 }, 00:18:54.695 { 00:18:54.695 "method": "bdev_nvme_set_hotplug", 00:18:54.695 "params": { 00:18:54.695 "period_us": 100000, 00:18:54.695 "enable": false 00:18:54.695 } 00:18:54.695 }, 00:18:54.695 { 00:18:54.695 "method": "bdev_wait_for_examine" 00:18:54.695 } 00:18:54.695 ] 00:18:54.695 }, 00:18:54.695 { 00:18:54.695 "subsystem": 
"nbd", 00:18:54.695 "config": [] 00:18:54.695 } 00:18:54.695 ] 00:18:54.695 }' 00:18:54.695 18:44:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.695 [2024-07-15 18:44:11.305523] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:18:54.695 [2024-07-15 18:44:11.305568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1123646 ] 00:18:54.695 EAL: No free 2048 kB hugepages reported on node 1 00:18:54.695 [2024-07-15 18:44:11.356049] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.955 [2024-07-15 18:44:11.434968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.955 [2024-07-15 18:44:11.577695] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:54.955 [2024-07-15 18:44:11.577777] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:55.524 18:44:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:55.524 18:44:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:55.524 18:44:12 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:55.524 Running I/O for 10 seconds... 
00:19:07.765 00:19:07.765 Latency(us) 00:19:07.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.765 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:07.765 Verification LBA range: start 0x0 length 0x2000 00:19:07.765 TLSTESTn1 : 10.02 5405.05 21.11 0.00 0.00 23643.46 4758.48 30773.43 00:19:07.765 =================================================================================================================== 00:19:07.765 Total : 5405.05 21.11 0.00 0.00 23643.46 4758.48 30773.43 00:19:07.765 0 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1123646 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1123646 ']' 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1123646 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1123646 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1123646' 00:19:07.765 killing process with pid 1123646 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1123646 00:19:07.765 Received shutdown signal, test time was about 10.000000 seconds 00:19:07.765 00:19:07.765 Latency(us) 00:19:07.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.765 
=================================================================================================================== 00:19:07.765 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:07.765 [2024-07-15 18:44:22.312760] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1123646 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1123554 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1123554 ']' 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1123554 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1123554 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1123554' 00:19:07.765 killing process with pid 1123554 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1123554 00:19:07.765 [2024-07-15 18:44:22.538999] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1123554 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1125545 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1125545 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1125545 ']' 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:07.765 18:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.765 [2024-07-15 18:44:22.784357] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:19:07.765 [2024-07-15 18:44:22.784403] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.765 EAL: No free 2048 kB hugepages reported on node 1 00:19:07.765 [2024-07-15 18:44:22.842010] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.765 [2024-07-15 18:44:22.913885] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:07.765 [2024-07-15 18:44:22.913926] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.765 [2024-07-15 18:44:22.913933] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.765 [2024-07-15 18:44:22.913938] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.765 [2024-07-15 18:44:22.913943] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:07.765 [2024-07-15 18:44:22.913977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.765 18:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:07.765 18:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:07.765 18:44:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:07.765 18:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:07.765 18:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.765 18:44:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:07.765 18:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.rVR2BEBXil 00:19:07.765 18:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.rVR2BEBXil 00:19:07.765 18:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:07.765 [2024-07-15 18:44:23.777267] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:07.765 18:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:07.765 18:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:07.765 [2024-07-15 18:44:24.122126] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:07.765 [2024-07-15 18:44:24.122327] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:07.765 18:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:07.765 malloc0 00:19:07.765 18:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:08.024 18:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rVR2BEBXil 00:19:08.024 [2024-07-15 18:44:24.631854] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:08.024 18:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1125899 00:19:08.024 18:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:08.024 18:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:08.024 18:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1125899 /var/tmp/bdevperf.sock 00:19:08.024 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1125899 ']' 00:19:08.024 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:08.024 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:19:08.024 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:08.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:08.024 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:08.024 18:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.025 [2024-07-15 18:44:24.691331] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:19:08.025 [2024-07-15 18:44:24.691377] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1125899 ] 00:19:08.025 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.284 [2024-07-15 18:44:24.744840] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.284 [2024-07-15 18:44:24.816966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.852 18:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:08.852 18:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:08.852 18:44:25 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rVR2BEBXil 00:19:09.111 18:44:25 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:09.111 [2024-07-15 18:44:25.792762] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:09.370 
nvme0n1 00:19:09.370 18:44:25 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:09.370 Running I/O for 1 seconds... 00:19:10.307 00:19:10.307 Latency(us) 00:19:10.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.307 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:10.307 Verification LBA range: start 0x0 length 0x2000 00:19:10.307 nvme0n1 : 1.02 5334.45 20.84 0.00 0.00 23782.20 4673.00 53340.61 00:19:10.307 =================================================================================================================== 00:19:10.308 Total : 5334.45 20.84 0.00 0.00 23782.20 4673.00 53340.61 00:19:10.308 0 00:19:10.308 18:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1125899 00:19:10.308 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1125899 ']' 00:19:10.308 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1125899 00:19:10.308 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:10.308 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:10.567 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1125899 00:19:10.567 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:10.567 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:10.567 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1125899' 00:19:10.567 killing process with pid 1125899 00:19:10.567 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1125899 00:19:10.567 Received shutdown signal, test time was about 1.000000 seconds 00:19:10.567 00:19:10.567 Latency(us) 00:19:10.567 Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:19:10.567 =================================================================================================================== 00:19:10.567 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:10.567 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1125899 00:19:10.567 18:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1125545 00:19:10.567 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1125545 ']' 00:19:10.567 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1125545 00:19:10.567 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:10.567 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:10.567 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1125545 00:19:10.826 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:10.826 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:10.826 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1125545' 00:19:10.826 killing process with pid 1125545 00:19:10.826 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1125545 00:19:10.826 [2024-07-15 18:44:27.277771] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:10.826 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1125545 00:19:10.826 18:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:19:10.826 18:44:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:10.826 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:10.826 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.826 18:44:27 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=1126375 00:19:10.826 18:44:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:10.826 18:44:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1126375 00:19:10.826 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1126375 ']' 00:19:10.826 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.826 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:10.826 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.826 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:10.826 18:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.826 [2024-07-15 18:44:27.520631] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:19:10.826 [2024-07-15 18:44:27.520677] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.086 EAL: No free 2048 kB hugepages reported on node 1 00:19:11.086 [2024-07-15 18:44:27.577313] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.086 [2024-07-15 18:44:27.656016] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.086 [2024-07-15 18:44:27.656048] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:11.086 [2024-07-15 18:44:27.656056] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:11.086 [2024-07-15 18:44:27.656062] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:11.086 [2024-07-15 18:44:27.656067] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:11.086 [2024-07-15 18:44:27.656082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.655 18:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:11.655 18:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:11.655 18:44:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:11.655 18:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:11.655 18:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.655 18:44:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.655 18:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:19:11.655 18:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.655 18:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.655 [2024-07-15 18:44:28.362105] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.915 malloc0 00:19:11.915 [2024-07-15 18:44:28.390428] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:11.915 [2024-07-15 18:44:28.390611] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:11.915 18:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.915 18:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1126544 00:19:11.915 18:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 
1126544 /var/tmp/bdevperf.sock 00:19:11.915 18:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:11.915 18:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1126544 ']' 00:19:11.915 18:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:11.915 18:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:11.915 18:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:11.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:11.915 18:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:11.915 18:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.915 [2024-07-15 18:44:28.464445] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:19:11.915 [2024-07-15 18:44:28.464482] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1126544 ] 00:19:11.915 EAL: No free 2048 kB hugepages reported on node 1 00:19:11.915 [2024-07-15 18:44:28.519252] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.915 [2024-07-15 18:44:28.598248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.853 18:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:12.853 18:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:12.853 18:44:29 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rVR2BEBXil 00:19:12.853 18:44:29 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:13.112 [2024-07-15 18:44:29.566096] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:13.112 nvme0n1 00:19:13.112 18:44:29 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:13.112 Running I/O for 1 seconds... 
00:19:14.493 00:19:14.493 Latency(us) 00:19:14.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.493 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:14.493 Verification LBA range: start 0x0 length 0x2000 00:19:14.493 nvme0n1 : 1.03 2584.44 10.10 0.00 0.00 49045.97 7265.95 80238.86 00:19:14.493 =================================================================================================================== 00:19:14.493 Total : 2584.44 10.10 0.00 0.00 49045.97 7265.95 80238.86 00:19:14.493 0 00:19:14.493 18:44:30 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:19:14.493 18:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.493 18:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.493 18:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.493 18:44:30 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:19:14.493 "subsystems": [ 00:19:14.493 { 00:19:14.493 "subsystem": "keyring", 00:19:14.493 "config": [ 00:19:14.493 { 00:19:14.493 "method": "keyring_file_add_key", 00:19:14.493 "params": { 00:19:14.493 "name": "key0", 00:19:14.493 "path": "/tmp/tmp.rVR2BEBXil" 00:19:14.493 } 00:19:14.493 } 00:19:14.493 ] 00:19:14.493 }, 00:19:14.493 { 00:19:14.493 "subsystem": "iobuf", 00:19:14.493 "config": [ 00:19:14.493 { 00:19:14.493 "method": "iobuf_set_options", 00:19:14.493 "params": { 00:19:14.493 "small_pool_count": 8192, 00:19:14.493 "large_pool_count": 1024, 00:19:14.493 "small_bufsize": 8192, 00:19:14.493 "large_bufsize": 135168 00:19:14.493 } 00:19:14.493 } 00:19:14.493 ] 00:19:14.493 }, 00:19:14.493 { 00:19:14.494 "subsystem": "sock", 00:19:14.494 "config": [ 00:19:14.494 { 00:19:14.494 "method": "sock_set_default_impl", 00:19:14.494 "params": { 00:19:14.494 "impl_name": "posix" 00:19:14.494 } 00:19:14.494 }, 00:19:14.494 { 00:19:14.494 "method": "sock_impl_set_options", 00:19:14.494 
"params": { 00:19:14.494 "impl_name": "ssl", 00:19:14.494 "recv_buf_size": 4096, 00:19:14.494 "send_buf_size": 4096, 00:19:14.494 "enable_recv_pipe": true, 00:19:14.494 "enable_quickack": false, 00:19:14.494 "enable_placement_id": 0, 00:19:14.494 "enable_zerocopy_send_server": true, 00:19:14.494 "enable_zerocopy_send_client": false, 00:19:14.494 "zerocopy_threshold": 0, 00:19:14.494 "tls_version": 0, 00:19:14.494 "enable_ktls": false 00:19:14.494 } 00:19:14.494 }, 00:19:14.494 { 00:19:14.494 "method": "sock_impl_set_options", 00:19:14.494 "params": { 00:19:14.494 "impl_name": "posix", 00:19:14.494 "recv_buf_size": 2097152, 00:19:14.494 "send_buf_size": 2097152, 00:19:14.494 "enable_recv_pipe": true, 00:19:14.494 "enable_quickack": false, 00:19:14.494 "enable_placement_id": 0, 00:19:14.494 "enable_zerocopy_send_server": true, 00:19:14.494 "enable_zerocopy_send_client": false, 00:19:14.494 "zerocopy_threshold": 0, 00:19:14.494 "tls_version": 0, 00:19:14.494 "enable_ktls": false 00:19:14.494 } 00:19:14.494 } 00:19:14.494 ] 00:19:14.494 }, 00:19:14.494 { 00:19:14.494 "subsystem": "vmd", 00:19:14.494 "config": [] 00:19:14.494 }, 00:19:14.494 { 00:19:14.494 "subsystem": "accel", 00:19:14.494 "config": [ 00:19:14.494 { 00:19:14.494 "method": "accel_set_options", 00:19:14.494 "params": { 00:19:14.494 "small_cache_size": 128, 00:19:14.494 "large_cache_size": 16, 00:19:14.494 "task_count": 2048, 00:19:14.494 "sequence_count": 2048, 00:19:14.494 "buf_count": 2048 00:19:14.494 } 00:19:14.494 } 00:19:14.494 ] 00:19:14.494 }, 00:19:14.494 { 00:19:14.494 "subsystem": "bdev", 00:19:14.494 "config": [ 00:19:14.494 { 00:19:14.494 "method": "bdev_set_options", 00:19:14.494 "params": { 00:19:14.494 "bdev_io_pool_size": 65535, 00:19:14.494 "bdev_io_cache_size": 256, 00:19:14.494 "bdev_auto_examine": true, 00:19:14.494 "iobuf_small_cache_size": 128, 00:19:14.494 "iobuf_large_cache_size": 16 00:19:14.494 } 00:19:14.494 }, 00:19:14.494 { 00:19:14.494 "method": "bdev_raid_set_options", 
00:19:14.494 "params": { 00:19:14.494 "process_window_size_kb": 1024 00:19:14.494 } 00:19:14.494 }, 00:19:14.494 { 00:19:14.494 "method": "bdev_iscsi_set_options", 00:19:14.494 "params": { 00:19:14.494 "timeout_sec": 30 00:19:14.494 } 00:19:14.494 }, 00:19:14.494 { 00:19:14.494 "method": "bdev_nvme_set_options", 00:19:14.494 "params": { 00:19:14.494 "action_on_timeout": "none", 00:19:14.494 "timeout_us": 0, 00:19:14.494 "timeout_admin_us": 0, 00:19:14.494 "keep_alive_timeout_ms": 10000, 00:19:14.494 "arbitration_burst": 0, 00:19:14.494 "low_priority_weight": 0, 00:19:14.494 "medium_priority_weight": 0, 00:19:14.494 "high_priority_weight": 0, 00:19:14.494 "nvme_adminq_poll_period_us": 10000, 00:19:14.494 "nvme_ioq_poll_period_us": 0, 00:19:14.494 "io_queue_requests": 0, 00:19:14.494 "delay_cmd_submit": true, 00:19:14.494 "transport_retry_count": 4, 00:19:14.494 "bdev_retry_count": 3, 00:19:14.494 "transport_ack_timeout": 0, 00:19:14.494 "ctrlr_loss_timeout_sec": 0, 00:19:14.494 "reconnect_delay_sec": 0, 00:19:14.494 "fast_io_fail_timeout_sec": 0, 00:19:14.494 "disable_auto_failback": false, 00:19:14.494 "generate_uuids": false, 00:19:14.494 "transport_tos": 0, 00:19:14.494 "nvme_error_stat": false, 00:19:14.494 "rdma_srq_size": 0, 00:19:14.494 "io_path_stat": false, 00:19:14.494 "allow_accel_sequence": false, 00:19:14.494 "rdma_max_cq_size": 0, 00:19:14.494 "rdma_cm_event_timeout_ms": 0, 00:19:14.494 "dhchap_digests": [ 00:19:14.494 "sha256", 00:19:14.494 "sha384", 00:19:14.494 "sha512" 00:19:14.494 ], 00:19:14.494 "dhchap_dhgroups": [ 00:19:14.494 "null", 00:19:14.494 "ffdhe2048", 00:19:14.494 "ffdhe3072", 00:19:14.494 "ffdhe4096", 00:19:14.494 "ffdhe6144", 00:19:14.494 "ffdhe8192" 00:19:14.494 ] 00:19:14.494 } 00:19:14.494 }, 00:19:14.494 { 00:19:14.494 "method": "bdev_nvme_set_hotplug", 00:19:14.494 "params": { 00:19:14.494 "period_us": 100000, 00:19:14.494 "enable": false 00:19:14.494 } 00:19:14.494 }, 00:19:14.494 { 00:19:14.494 "method": "bdev_malloc_create", 
00:19:14.494 "params": { 00:19:14.494 "name": "malloc0", 00:19:14.494 "num_blocks": 8192, 00:19:14.494 "block_size": 4096, 00:19:14.494 "physical_block_size": 4096, 00:19:14.494 "uuid": "3d801fac-7bf6-45e2-b900-b140a1fb3d14", 00:19:14.494 "optimal_io_boundary": 0 00:19:14.494 } 00:19:14.494 }, 00:19:14.494 { 00:19:14.494 "method": "bdev_wait_for_examine" 00:19:14.494 } 00:19:14.494 ] 00:19:14.494 }, 00:19:14.494 { 00:19:14.494 "subsystem": "nbd", 00:19:14.494 "config": [] 00:19:14.494 }, 00:19:14.494 { 00:19:14.494 "subsystem": "scheduler", 00:19:14.494 "config": [ 00:19:14.494 { 00:19:14.494 "method": "framework_set_scheduler", 00:19:14.494 "params": { 00:19:14.494 "name": "static" 00:19:14.494 } 00:19:14.494 } 00:19:14.494 ] 00:19:14.494 }, 00:19:14.494 { 00:19:14.494 "subsystem": "nvmf", 00:19:14.494 "config": [ 00:19:14.494 { 00:19:14.494 "method": "nvmf_set_config", 00:19:14.494 "params": { 00:19:14.494 "discovery_filter": "match_any", 00:19:14.494 "admin_cmd_passthru": { 00:19:14.494 "identify_ctrlr": false 00:19:14.494 } 00:19:14.494 } 00:19:14.494 }, 00:19:14.494 { 00:19:14.494 "method": "nvmf_set_max_subsystems", 00:19:14.494 "params": { 00:19:14.494 "max_subsystems": 1024 00:19:14.494 } 00:19:14.494 }, 00:19:14.494 { 00:19:14.494 "method": "nvmf_set_crdt", 00:19:14.494 "params": { 00:19:14.494 "crdt1": 0, 00:19:14.494 "crdt2": 0, 00:19:14.494 "crdt3": 0 00:19:14.494 } 00:19:14.494 }, 00:19:14.494 { 00:19:14.494 "method": "nvmf_create_transport", 00:19:14.494 "params": { 00:19:14.494 "trtype": "TCP", 00:19:14.494 "max_queue_depth": 128, 00:19:14.494 "max_io_qpairs_per_ctrlr": 127, 00:19:14.494 "in_capsule_data_size": 4096, 00:19:14.494 "max_io_size": 131072, 00:19:14.494 "io_unit_size": 131072, 00:19:14.494 "max_aq_depth": 128, 00:19:14.494 "num_shared_buffers": 511, 00:19:14.494 "buf_cache_size": 4294967295, 00:19:14.494 "dif_insert_or_strip": false, 00:19:14.494 "zcopy": false, 00:19:14.494 "c2h_success": false, 00:19:14.494 "sock_priority": 0, 
00:19:14.494 "abort_timeout_sec": 1, 00:19:14.494 "ack_timeout": 0, 00:19:14.494 "data_wr_pool_size": 0 00:19:14.494 } 00:19:14.494 }, 00:19:14.494 { 00:19:14.494 "method": "nvmf_create_subsystem", 00:19:14.494 "params": { 00:19:14.494 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.494 "allow_any_host": false, 00:19:14.494 "serial_number": "00000000000000000000", 00:19:14.494 "model_number": "SPDK bdev Controller", 00:19:14.494 "max_namespaces": 32, 00:19:14.494 "min_cntlid": 1, 00:19:14.494 "max_cntlid": 65519, 00:19:14.494 "ana_reporting": false 00:19:14.494 } 00:19:14.494 }, 00:19:14.494 { 00:19:14.494 "method": "nvmf_subsystem_add_host", 00:19:14.494 "params": { 00:19:14.494 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.494 "host": "nqn.2016-06.io.spdk:host1", 00:19:14.494 "psk": "key0" 00:19:14.494 } 00:19:14.494 }, 00:19:14.494 { 00:19:14.494 "method": "nvmf_subsystem_add_ns", 00:19:14.494 "params": { 00:19:14.494 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.494 "namespace": { 00:19:14.494 "nsid": 1, 00:19:14.494 "bdev_name": "malloc0", 00:19:14.494 "nguid": "3D801FAC7BF645E2B900B140A1FB3D14", 00:19:14.494 "uuid": "3d801fac-7bf6-45e2-b900-b140a1fb3d14", 00:19:14.494 "no_auto_visible": false 00:19:14.494 } 00:19:14.494 } 00:19:14.494 }, 00:19:14.494 { 00:19:14.494 "method": "nvmf_subsystem_add_listener", 00:19:14.494 "params": { 00:19:14.494 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.494 "listen_address": { 00:19:14.494 "trtype": "TCP", 00:19:14.494 "adrfam": "IPv4", 00:19:14.494 "traddr": "10.0.0.2", 00:19:14.494 "trsvcid": "4420" 00:19:14.494 }, 00:19:14.494 "secure_channel": true 00:19:14.494 } 00:19:14.494 } 00:19:14.494 ] 00:19:14.494 } 00:19:14.494 ] 00:19:14.494 }' 00:19:14.494 18:44:30 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:14.494 18:44:31 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:19:14.494 "subsystems": [ 00:19:14.494 { 
00:19:14.494 "subsystem": "keyring", 00:19:14.494 "config": [ 00:19:14.494 { 00:19:14.494 "method": "keyring_file_add_key", 00:19:14.494 "params": { 00:19:14.494 "name": "key0", 00:19:14.494 "path": "/tmp/tmp.rVR2BEBXil" 00:19:14.494 } 00:19:14.494 } 00:19:14.494 ] 00:19:14.494 }, 00:19:14.494 { 00:19:14.494 "subsystem": "iobuf", 00:19:14.494 "config": [ 00:19:14.494 { 00:19:14.494 "method": "iobuf_set_options", 00:19:14.494 "params": { 00:19:14.494 "small_pool_count": 8192, 00:19:14.494 "large_pool_count": 1024, 00:19:14.494 "small_bufsize": 8192, 00:19:14.494 "large_bufsize": 135168 00:19:14.494 } 00:19:14.495 } 00:19:14.495 ] 00:19:14.495 }, 00:19:14.495 { 00:19:14.495 "subsystem": "sock", 00:19:14.495 "config": [ 00:19:14.495 { 00:19:14.495 "method": "sock_set_default_impl", 00:19:14.495 "params": { 00:19:14.495 "impl_name": "posix" 00:19:14.495 } 00:19:14.495 }, 00:19:14.495 { 00:19:14.495 "method": "sock_impl_set_options", 00:19:14.495 "params": { 00:19:14.495 "impl_name": "ssl", 00:19:14.495 "recv_buf_size": 4096, 00:19:14.495 "send_buf_size": 4096, 00:19:14.495 "enable_recv_pipe": true, 00:19:14.495 "enable_quickack": false, 00:19:14.495 "enable_placement_id": 0, 00:19:14.495 "enable_zerocopy_send_server": true, 00:19:14.495 "enable_zerocopy_send_client": false, 00:19:14.495 "zerocopy_threshold": 0, 00:19:14.495 "tls_version": 0, 00:19:14.495 "enable_ktls": false 00:19:14.495 } 00:19:14.495 }, 00:19:14.495 { 00:19:14.495 "method": "sock_impl_set_options", 00:19:14.495 "params": { 00:19:14.495 "impl_name": "posix", 00:19:14.495 "recv_buf_size": 2097152, 00:19:14.495 "send_buf_size": 2097152, 00:19:14.495 "enable_recv_pipe": true, 00:19:14.495 "enable_quickack": false, 00:19:14.495 "enable_placement_id": 0, 00:19:14.495 "enable_zerocopy_send_server": true, 00:19:14.495 "enable_zerocopy_send_client": false, 00:19:14.495 "zerocopy_threshold": 0, 00:19:14.495 "tls_version": 0, 00:19:14.495 "enable_ktls": false 00:19:14.495 } 00:19:14.495 } 00:19:14.495 ] 
00:19:14.495 }, 00:19:14.495 { 00:19:14.495 "subsystem": "vmd", 00:19:14.495 "config": [] 00:19:14.495 }, 00:19:14.495 { 00:19:14.495 "subsystem": "accel", 00:19:14.495 "config": [ 00:19:14.495 { 00:19:14.495 "method": "accel_set_options", 00:19:14.495 "params": { 00:19:14.495 "small_cache_size": 128, 00:19:14.495 "large_cache_size": 16, 00:19:14.495 "task_count": 2048, 00:19:14.495 "sequence_count": 2048, 00:19:14.495 "buf_count": 2048 00:19:14.495 } 00:19:14.495 } 00:19:14.495 ] 00:19:14.495 }, 00:19:14.495 { 00:19:14.495 "subsystem": "bdev", 00:19:14.495 "config": [ 00:19:14.495 { 00:19:14.495 "method": "bdev_set_options", 00:19:14.495 "params": { 00:19:14.495 "bdev_io_pool_size": 65535, 00:19:14.495 "bdev_io_cache_size": 256, 00:19:14.495 "bdev_auto_examine": true, 00:19:14.495 "iobuf_small_cache_size": 128, 00:19:14.495 "iobuf_large_cache_size": 16 00:19:14.495 } 00:19:14.495 }, 00:19:14.495 { 00:19:14.495 "method": "bdev_raid_set_options", 00:19:14.495 "params": { 00:19:14.495 "process_window_size_kb": 1024 00:19:14.495 } 00:19:14.495 }, 00:19:14.495 { 00:19:14.495 "method": "bdev_iscsi_set_options", 00:19:14.495 "params": { 00:19:14.495 "timeout_sec": 30 00:19:14.495 } 00:19:14.495 }, 00:19:14.495 { 00:19:14.495 "method": "bdev_nvme_set_options", 00:19:14.495 "params": { 00:19:14.495 "action_on_timeout": "none", 00:19:14.495 "timeout_us": 0, 00:19:14.495 "timeout_admin_us": 0, 00:19:14.495 "keep_alive_timeout_ms": 10000, 00:19:14.495 "arbitration_burst": 0, 00:19:14.495 "low_priority_weight": 0, 00:19:14.495 "medium_priority_weight": 0, 00:19:14.495 "high_priority_weight": 0, 00:19:14.495 "nvme_adminq_poll_period_us": 10000, 00:19:14.495 "nvme_ioq_poll_period_us": 0, 00:19:14.495 "io_queue_requests": 512, 00:19:14.495 "delay_cmd_submit": true, 00:19:14.495 "transport_retry_count": 4, 00:19:14.495 "bdev_retry_count": 3, 00:19:14.495 "transport_ack_timeout": 0, 00:19:14.495 "ctrlr_loss_timeout_sec": 0, 00:19:14.495 "reconnect_delay_sec": 0, 00:19:14.495 
"fast_io_fail_timeout_sec": 0, 00:19:14.495 "disable_auto_failback": false, 00:19:14.495 "generate_uuids": false, 00:19:14.495 "transport_tos": 0, 00:19:14.495 "nvme_error_stat": false, 00:19:14.495 "rdma_srq_size": 0, 00:19:14.495 "io_path_stat": false, 00:19:14.495 "allow_accel_sequence": false, 00:19:14.495 "rdma_max_cq_size": 0, 00:19:14.495 "rdma_cm_event_timeout_ms": 0, 00:19:14.495 "dhchap_digests": [ 00:19:14.495 "sha256", 00:19:14.495 "sha384", 00:19:14.495 "sha512" 00:19:14.495 ], 00:19:14.495 "dhchap_dhgroups": [ 00:19:14.495 "null", 00:19:14.495 "ffdhe2048", 00:19:14.495 "ffdhe3072", 00:19:14.495 "ffdhe4096", 00:19:14.495 "ffdhe6144", 00:19:14.495 "ffdhe8192" 00:19:14.495 ] 00:19:14.495 } 00:19:14.495 }, 00:19:14.495 { 00:19:14.495 "method": "bdev_nvme_attach_controller", 00:19:14.495 "params": { 00:19:14.495 "name": "nvme0", 00:19:14.495 "trtype": "TCP", 00:19:14.495 "adrfam": "IPv4", 00:19:14.495 "traddr": "10.0.0.2", 00:19:14.495 "trsvcid": "4420", 00:19:14.495 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.495 "prchk_reftag": false, 00:19:14.495 "prchk_guard": false, 00:19:14.495 "ctrlr_loss_timeout_sec": 0, 00:19:14.495 "reconnect_delay_sec": 0, 00:19:14.495 "fast_io_fail_timeout_sec": 0, 00:19:14.495 "psk": "key0", 00:19:14.495 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.495 "hdgst": false, 00:19:14.495 "ddgst": false 00:19:14.495 } 00:19:14.495 }, 00:19:14.495 { 00:19:14.495 "method": "bdev_nvme_set_hotplug", 00:19:14.495 "params": { 00:19:14.495 "period_us": 100000, 00:19:14.495 "enable": false 00:19:14.495 } 00:19:14.495 }, 00:19:14.495 { 00:19:14.495 "method": "bdev_enable_histogram", 00:19:14.495 "params": { 00:19:14.495 "name": "nvme0n1", 00:19:14.495 "enable": true 00:19:14.495 } 00:19:14.495 }, 00:19:14.495 { 00:19:14.495 "method": "bdev_wait_for_examine" 00:19:14.495 } 00:19:14.495 ] 00:19:14.495 }, 00:19:14.495 { 00:19:14.495 "subsystem": "nbd", 00:19:14.495 "config": [] 00:19:14.495 } 00:19:14.495 ] 00:19:14.495 }' 00:19:14.495 
18:44:31 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1126544 00:19:14.495 18:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1126544 ']' 00:19:14.495 18:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1126544 00:19:14.495 18:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:14.495 18:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:14.495 18:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1126544 00:19:14.495 18:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:14.495 18:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:14.495 18:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1126544' 00:19:14.495 killing process with pid 1126544 00:19:14.495 18:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1126544 00:19:14.495 Received shutdown signal, test time was about 1.000000 seconds 00:19:14.495 00:19:14.495 Latency(us) 00:19:14.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.495 =================================================================================================================== 00:19:14.495 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:14.495 18:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1126544 00:19:14.755 18:44:31 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1126375 00:19:14.755 18:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1126375 ']' 00:19:14.755 18:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1126375 00:19:14.755 18:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:14.755 18:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:14.755 18:44:31 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1126375 00:19:14.755 18:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:14.755 18:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:14.755 18:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1126375' 00:19:14.755 killing process with pid 1126375 00:19:14.755 18:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1126375 00:19:14.755 18:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1126375 00:19:15.014 18:44:31 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:19:15.014 18:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:15.014 18:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:15.014 18:44:31 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:19:15.014 "subsystems": [ 00:19:15.014 { 00:19:15.014 "subsystem": "keyring", 00:19:15.014 "config": [ 00:19:15.014 { 00:19:15.014 "method": "keyring_file_add_key", 00:19:15.014 "params": { 00:19:15.014 "name": "key0", 00:19:15.014 "path": "/tmp/tmp.rVR2BEBXil" 00:19:15.014 } 00:19:15.014 } 00:19:15.014 ] 00:19:15.014 }, 00:19:15.014 { 00:19:15.014 "subsystem": "iobuf", 00:19:15.014 "config": [ 00:19:15.014 { 00:19:15.014 "method": "iobuf_set_options", 00:19:15.014 "params": { 00:19:15.014 "small_pool_count": 8192, 00:19:15.014 "large_pool_count": 1024, 00:19:15.014 "small_bufsize": 8192, 00:19:15.014 "large_bufsize": 135168 00:19:15.014 } 00:19:15.014 } 00:19:15.014 ] 00:19:15.014 }, 00:19:15.014 { 00:19:15.014 "subsystem": "sock", 00:19:15.014 "config": [ 00:19:15.014 { 00:19:15.014 "method": "sock_set_default_impl", 00:19:15.014 "params": { 00:19:15.014 "impl_name": "posix" 00:19:15.014 } 00:19:15.014 }, 00:19:15.014 { 00:19:15.014 "method": "sock_impl_set_options", 00:19:15.014 "params": { 00:19:15.014 
"impl_name": "ssl", 00:19:15.014 "recv_buf_size": 4096, 00:19:15.014 "send_buf_size": 4096, 00:19:15.014 "enable_recv_pipe": true, 00:19:15.014 "enable_quickack": false, 00:19:15.014 "enable_placement_id": 0, 00:19:15.014 "enable_zerocopy_send_server": true, 00:19:15.014 "enable_zerocopy_send_client": false, 00:19:15.014 "zerocopy_threshold": 0, 00:19:15.014 "tls_version": 0, 00:19:15.014 "enable_ktls": false 00:19:15.014 } 00:19:15.014 }, 00:19:15.014 { 00:19:15.014 "method": "sock_impl_set_options", 00:19:15.014 "params": { 00:19:15.014 "impl_name": "posix", 00:19:15.015 "recv_buf_size": 2097152, 00:19:15.015 "send_buf_size": 2097152, 00:19:15.015 "enable_recv_pipe": true, 00:19:15.015 "enable_quickack": false, 00:19:15.015 "enable_placement_id": 0, 00:19:15.015 "enable_zerocopy_send_server": true, 00:19:15.015 "enable_zerocopy_send_client": false, 00:19:15.015 "zerocopy_threshold": 0, 00:19:15.015 "tls_version": 0, 00:19:15.015 "enable_ktls": false 00:19:15.015 } 00:19:15.015 } 00:19:15.015 ] 00:19:15.015 }, 00:19:15.015 { 00:19:15.015 "subsystem": "vmd", 00:19:15.015 "config": [] 00:19:15.015 }, 00:19:15.015 { 00:19:15.015 "subsystem": "accel", 00:19:15.015 "config": [ 00:19:15.015 { 00:19:15.015 "method": "accel_set_options", 00:19:15.015 "params": { 00:19:15.015 "small_cache_size": 128, 00:19:15.015 "large_cache_size": 16, 00:19:15.015 "task_count": 2048, 00:19:15.015 "sequence_count": 2048, 00:19:15.015 "buf_count": 2048 00:19:15.015 } 00:19:15.015 } 00:19:15.015 ] 00:19:15.015 }, 00:19:15.015 { 00:19:15.015 "subsystem": "bdev", 00:19:15.015 "config": [ 00:19:15.015 { 00:19:15.015 "method": "bdev_set_options", 00:19:15.015 "params": { 00:19:15.015 "bdev_io_pool_size": 65535, 00:19:15.015 "bdev_io_cache_size": 256, 00:19:15.015 "bdev_auto_examine": true, 00:19:15.015 "iobuf_small_cache_size": 128, 00:19:15.015 "iobuf_large_cache_size": 16 00:19:15.015 } 00:19:15.015 }, 00:19:15.015 { 00:19:15.015 "method": "bdev_raid_set_options", 00:19:15.015 "params": { 
00:19:15.015 "process_window_size_kb": 1024 00:19:15.015 } 00:19:15.015 }, 00:19:15.015 { 00:19:15.015 "method": "bdev_iscsi_set_options", 00:19:15.015 "params": { 00:19:15.015 "timeout_sec": 30 00:19:15.015 } 00:19:15.015 }, 00:19:15.015 { 00:19:15.015 "method": "bdev_nvme_set_options", 00:19:15.015 "params": { 00:19:15.015 "action_on_timeout": "none", 00:19:15.015 "timeout_us": 0, 00:19:15.015 "timeout_admin_us": 0, 00:19:15.015 "keep_alive_timeout_ms": 10000, 00:19:15.015 "arbitration_burst": 0, 00:19:15.015 "low_priority_weight": 0, 00:19:15.015 "medium_priority_weight": 0, 00:19:15.015 "high_priority_weight": 0, 00:19:15.015 "nvme_adminq_poll_period_us": 10000, 00:19:15.015 "nvme_ioq_poll_period_us": 0, 00:19:15.015 "io_queue_requests": 0, 00:19:15.015 "delay_cmd_submit": true, 00:19:15.015 "transport_retry_count": 4, 00:19:15.015 "bdev_retry_count": 3, 00:19:15.015 "transport_ack_timeout": 0, 00:19:15.015 "ctrlr_loss_timeout_sec": 0, 00:19:15.015 "reconnect_delay_sec": 0, 00:19:15.015 "fast_io_fail_timeout_sec": 0, 00:19:15.015 "disable_auto_failback": false, 00:19:15.015 "generate_uuids": false, 00:19:15.015 "transport_tos": 0, 00:19:15.015 "nvme_error_stat": false, 00:19:15.015 "rdma_srq_size": 0, 00:19:15.015 "io_path_stat": false, 00:19:15.015 "allow_accel_sequence": false, 00:19:15.015 "rdma_max_cq_size": 0, 00:19:15.015 "rdma_cm_event_timeout_ms": 0, 00:19:15.015 "dhchap_digests": [ 00:19:15.015 "sha256", 00:19:15.015 "sha384", 00:19:15.015 "sha512" 00:19:15.015 ], 00:19:15.015 "dhchap_dhgroups": [ 00:19:15.015 "null", 00:19:15.015 "ffdhe2048", 00:19:15.015 "ffdhe3072", 00:19:15.015 "ffdhe4096", 00:19:15.015 "ffdhe6144", 00:19:15.015 "ffdhe8192" 00:19:15.015 ] 00:19:15.015 } 00:19:15.015 }, 00:19:15.015 { 00:19:15.015 "method": "bdev_nvme_set_hotplug", 00:19:15.015 "params": { 00:19:15.015 "period_us": 100000, 00:19:15.015 "enable": false 00:19:15.015 } 00:19:15.015 }, 00:19:15.015 { 00:19:15.015 "method": "bdev_malloc_create", 00:19:15.015 "params": { 
00:19:15.015 "name": "malloc0", 00:19:15.015 "num_blocks": 8192, 00:19:15.015 "block_size": 4096, 00:19:15.015 "physical_block_size": 4096, 00:19:15.015 "uuid": "3d801fac-7bf6-45e2-b900-b140a1fb3d14", 00:19:15.015 "optimal_io_boundary": 0 00:19:15.015 } 00:19:15.015 }, 00:19:15.015 { 00:19:15.015 "method": "bdev_wait_for_examine" 00:19:15.015 } 00:19:15.015 ] 00:19:15.015 }, 00:19:15.015 { 00:19:15.015 "subsystem": "nbd", 00:19:15.015 "config": [] 00:19:15.015 }, 00:19:15.015 { 00:19:15.015 "subsystem": "scheduler", 00:19:15.015 "config": [ 00:19:15.015 { 00:19:15.015 "method": "framework_set_scheduler", 00:19:15.015 "params": { 00:19:15.015 "name": "static" 00:19:15.015 } 00:19:15.015 } 00:19:15.015 ] 00:19:15.015 }, 00:19:15.015 { 00:19:15.015 "subsystem": "nvmf", 00:19:15.015 "config": [ 00:19:15.015 { 00:19:15.015 "method": "nvmf_set_config", 00:19:15.015 "params": { 00:19:15.015 "discovery_filter": "match_any", 00:19:15.015 "admin_cmd_passthru": { 00:19:15.015 "identify_ctrlr": false 00:19:15.015 } 00:19:15.015 } 00:19:15.015 }, 00:19:15.015 { 00:19:15.015 "method": "nvmf_set_max_subsystems", 00:19:15.015 "params": { 00:19:15.015 "max_subsystems": 1024 00:19:15.015 } 00:19:15.015 }, 00:19:15.015 { 00:19:15.015 "method": "nvmf_set_crdt", 00:19:15.015 "params": { 00:19:15.015 "crdt1": 0, 00:19:15.015 "crdt2": 0, 00:19:15.015 "crdt3": 0 00:19:15.015 } 00:19:15.015 }, 00:19:15.015 { 00:19:15.015 "method": "nvmf_create_transport", 00:19:15.015 "params": { 00:19:15.015 "trtype": "TCP", 00:19:15.015 "max_queue_depth": 128, 00:19:15.015 "max_io_qpairs_per_ctrlr": 127, 00:19:15.015 "in_capsule_data_size": 4096, 00:19:15.015 "max_io_size": 131072, 00:19:15.015 "io_unit_size": 131072, 00:19:15.015 "max_aq_depth": 128, 00:19:15.015 "num_shared_buffers": 511, 00:19:15.015 "buf_cache_size": 4294967295, 00:19:15.015 "dif_insert_or_strip": false, 00:19:15.015 "zcopy": false, 00:19:15.015 "c2h_success": false, 00:19:15.015 "sock_priority": 0, 00:19:15.015 "abort_timeout_sec": 
1, 00:19:15.015 "ack_timeout": 0, 00:19:15.015 "data_wr_pool_size": 0 00:19:15.015 } 00:19:15.015 }, 00:19:15.015 { 00:19:15.015 "method": "nvmf_create_subsystem", 00:19:15.015 "params": { 00:19:15.015 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.015 "allow_any_host": false, 00:19:15.015 "serial_number": "00000000000000000000", 00:19:15.015 "model_number": "SPDK bdev Controller", 00:19:15.015 "max_namespaces": 32, 00:19:15.015 "min_cntlid": 1, 00:19:15.015 "max_cntlid": 65519, 00:19:15.015 "ana_reporting": false 00:19:15.015 } 00:19:15.015 }, 00:19:15.015 { 00:19:15.015 "method": "nvmf_subsystem_add_host", 00:19:15.015 "params": { 00:19:15.015 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.015 "host": "nqn.2016-06.io.spdk:host1", 00:19:15.015 "psk": "key0" 00:19:15.015 } 00:19:15.015 }, 00:19:15.015 { 00:19:15.015 "method": "nvmf_subsystem_add_ns", 00:19:15.015 "params": { 00:19:15.015 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.015 "namespace": { 00:19:15.015 "nsid": 1, 00:19:15.015 "bdev_name": "malloc0", 00:19:15.015 "nguid": "3D801FAC7BF645E2B900B140A1FB3D14", 00:19:15.015 "uuid": "3d801fac-7bf6-45e2-b900-b140a1fb3d14", 00:19:15.015 "no_auto_visible": false 00:19:15.015 } 00:19:15.015 } 00:19:15.015 }, 00:19:15.015 { 00:19:15.015 "method": "nvmf_subsystem_add_listener", 00:19:15.015 "params": { 00:19:15.015 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.015 "listen_address": { 00:19:15.015 "trtype": "TCP", 00:19:15.015 "adrfam": "IPv4", 00:19:15.015 "traddr": "10.0.0.2", 00:19:15.015 "trsvcid": "4420" 00:19:15.015 }, 00:19:15.015 "secure_channel": true 00:19:15.015 } 00:19:15.015 } 00:19:15.015 ] 00:19:15.015 } 00:19:15.015 ] 00:19:15.015 }' 00:19:15.015 18:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.015 18:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1127103 00:19:15.015 18:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:15.015 18:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1127103 00:19:15.015 18:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1127103 ']' 00:19:15.015 18:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.015 18:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:15.015 18:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.015 18:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:15.015 18:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.015 [2024-07-15 18:44:31.666798] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:19:15.015 [2024-07-15 18:44:31.666845] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:15.015 EAL: No free 2048 kB hugepages reported on node 1 00:19:15.274 [2024-07-15 18:44:31.723358] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.274 [2024-07-15 18:44:31.802179] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:15.274 [2024-07-15 18:44:31.802214] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:15.274 [2024-07-15 18:44:31.802220] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:15.274 [2024-07-15 18:44:31.802230] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:15.274 [2024-07-15 18:44:31.802235] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:15.274 [2024-07-15 18:44:31.802282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.532 [2024-07-15 18:44:32.012845] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:15.532 [2024-07-15 18:44:32.044872] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:15.532 [2024-07-15 18:44:32.054452] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:15.791 18:44:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:15.791 18:44:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:15.791 18:44:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:15.791 18:44:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:15.791 18:44:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.051 18:44:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.051 18:44:32 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1127179 00:19:16.051 18:44:32 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1127179 /var/tmp/bdevperf.sock 00:19:16.051 18:44:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1127179 ']' 00:19:16.051 18:44:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:16.051 18:44:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:16.051 18:44:32 
nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:16.051 18:44:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:16.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:16.051 18:44:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:16.051 18:44:32 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:19:16.051 "subsystems": [ 00:19:16.051 { 00:19:16.051 "subsystem": "keyring", 00:19:16.051 "config": [ 00:19:16.051 { 00:19:16.051 "method": "keyring_file_add_key", 00:19:16.051 "params": { 00:19:16.051 "name": "key0", 00:19:16.051 "path": "/tmp/tmp.rVR2BEBXil" 00:19:16.051 } 00:19:16.051 } 00:19:16.051 ] 00:19:16.051 }, 00:19:16.051 { 00:19:16.051 "subsystem": "iobuf", 00:19:16.051 "config": [ 00:19:16.051 { 00:19:16.051 "method": "iobuf_set_options", 00:19:16.051 "params": { 00:19:16.051 "small_pool_count": 8192, 00:19:16.051 "large_pool_count": 1024, 00:19:16.051 "small_bufsize": 8192, 00:19:16.051 "large_bufsize": 135168 00:19:16.051 } 00:19:16.051 } 00:19:16.051 ] 00:19:16.051 }, 00:19:16.051 { 00:19:16.051 "subsystem": "sock", 00:19:16.051 "config": [ 00:19:16.051 { 00:19:16.051 "method": "sock_set_default_impl", 00:19:16.051 "params": { 00:19:16.051 "impl_name": "posix" 00:19:16.051 } 00:19:16.051 }, 00:19:16.051 { 00:19:16.051 "method": "sock_impl_set_options", 00:19:16.051 "params": { 00:19:16.051 "impl_name": "ssl", 00:19:16.051 "recv_buf_size": 4096, 00:19:16.051 "send_buf_size": 4096, 00:19:16.051 "enable_recv_pipe": true, 00:19:16.051 "enable_quickack": false, 00:19:16.051 "enable_placement_id": 0, 00:19:16.051 "enable_zerocopy_send_server": true, 00:19:16.051 "enable_zerocopy_send_client": false, 00:19:16.051 
"zerocopy_threshold": 0, 00:19:16.051 "tls_version": 0, 00:19:16.051 "enable_ktls": false 00:19:16.051 } 00:19:16.051 }, 00:19:16.051 { 00:19:16.051 "method": "sock_impl_set_options", 00:19:16.051 "params": { 00:19:16.051 "impl_name": "posix", 00:19:16.051 "recv_buf_size": 2097152, 00:19:16.051 "send_buf_size": 2097152, 00:19:16.051 "enable_recv_pipe": true, 00:19:16.051 "enable_quickack": false, 00:19:16.051 "enable_placement_id": 0, 00:19:16.051 "enable_zerocopy_send_server": true, 00:19:16.051 "enable_zerocopy_send_client": false, 00:19:16.051 "zerocopy_threshold": 0, 00:19:16.051 "tls_version": 0, 00:19:16.051 "enable_ktls": false 00:19:16.051 } 00:19:16.051 } 00:19:16.051 ] 00:19:16.051 }, 00:19:16.051 { 00:19:16.051 "subsystem": "vmd", 00:19:16.051 "config": [] 00:19:16.051 }, 00:19:16.051 { 00:19:16.051 "subsystem": "accel", 00:19:16.051 "config": [ 00:19:16.051 { 00:19:16.051 "method": "accel_set_options", 00:19:16.051 "params": { 00:19:16.051 "small_cache_size": 128, 00:19:16.051 "large_cache_size": 16, 00:19:16.051 "task_count": 2048, 00:19:16.051 "sequence_count": 2048, 00:19:16.051 "buf_count": 2048 00:19:16.051 } 00:19:16.051 } 00:19:16.051 ] 00:19:16.051 }, 00:19:16.051 { 00:19:16.051 "subsystem": "bdev", 00:19:16.051 "config": [ 00:19:16.051 { 00:19:16.051 "method": "bdev_set_options", 00:19:16.051 "params": { 00:19:16.051 "bdev_io_pool_size": 65535, 00:19:16.051 "bdev_io_cache_size": 256, 00:19:16.051 "bdev_auto_examine": true, 00:19:16.051 "iobuf_small_cache_size": 128, 00:19:16.051 "iobuf_large_cache_size": 16 00:19:16.051 } 00:19:16.051 }, 00:19:16.051 { 00:19:16.051 "method": "bdev_raid_set_options", 00:19:16.051 "params": { 00:19:16.051 "process_window_size_kb": 1024 00:19:16.051 } 00:19:16.051 }, 00:19:16.051 { 00:19:16.051 "method": "bdev_iscsi_set_options", 00:19:16.051 "params": { 00:19:16.051 "timeout_sec": 30 00:19:16.051 } 00:19:16.051 }, 00:19:16.051 { 00:19:16.051 "method": "bdev_nvme_set_options", 00:19:16.051 "params": { 00:19:16.051 
"action_on_timeout": "none", 00:19:16.051 "timeout_us": 0, 00:19:16.051 "timeout_admin_us": 0, 00:19:16.051 "keep_alive_timeout_ms": 10000, 00:19:16.051 "arbitration_burst": 0, 00:19:16.051 "low_priority_weight": 0, 00:19:16.051 "medium_priority_weight": 0, 00:19:16.051 "high_priority_weight": 0, 00:19:16.051 "nvme_adminq_poll_period_us": 10000, 00:19:16.051 "nvme_ioq_poll_period_us": 0, 00:19:16.051 "io_queue_requests": 512, 00:19:16.051 "delay_cmd_submit": true, 00:19:16.051 "transport_retry_count": 4, 00:19:16.051 "bdev_retry_count": 3, 00:19:16.051 "transport_ack_timeout": 0, 00:19:16.051 "ctrlr_loss_timeout_sec": 0, 00:19:16.051 "reconnect_delay_sec": 0, 00:19:16.051 "fast_io_fail_timeout_sec": 0, 00:19:16.051 "disable_auto_failback": false, 00:19:16.051 "generate_uuids": false, 00:19:16.051 "transport_tos": 0, 00:19:16.051 "nvme_error_stat": false, 00:19:16.051 "rdma_srq_size": 0, 00:19:16.051 "io_path_stat": false, 00:19:16.051 "allow_accel_sequence": false, 00:19:16.051 "rdma_max_cq_size": 0, 00:19:16.051 "rdma_cm_event_timeout_ms": 0, 00:19:16.051 "dhchap_digests": [ 00:19:16.051 "sha256", 00:19:16.051 "sha384", 00:19:16.051 "sha512" 00:19:16.051 ], 00:19:16.051 "dhchap_dhgroups": [ 00:19:16.051 "null", 00:19:16.051 "ffdhe2048", 00:19:16.051 "ffdhe3072", 00:19:16.051 "ffdhe4096", 00:19:16.051 "ffdhe6144", 00:19:16.051 "ffdhe8192" 00:19:16.051 ] 00:19:16.051 } 00:19:16.051 }, 00:19:16.051 { 00:19:16.051 "method": "bdev_nvme_attach_controller", 00:19:16.051 "params": { 00:19:16.051 "name": "nvme0", 00:19:16.051 "trtype": "TCP", 00:19:16.051 "adrfam": "IPv4", 00:19:16.051 "traddr": "10.0.0.2", 00:19:16.051 "trsvcid": "4420", 00:19:16.051 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.051 "prchk_reftag": false, 00:19:16.051 "prchk_guard": false, 00:19:16.051 "ctrlr_loss_timeout_sec": 0, 00:19:16.051 "reconnect_delay_sec": 0, 00:19:16.051 "fast_io_fail_timeout_sec": 0, 00:19:16.052 "psk": "key0", 00:19:16.052 "hostnqn": "nqn.2016-06.io.spdk:host1", 
00:19:16.052 "hdgst": false, 00:19:16.052 "ddgst": false 00:19:16.052 } 00:19:16.052 }, 00:19:16.052 { 00:19:16.052 "method": "bdev_nvme_set_hotplug", 00:19:16.052 "params": { 00:19:16.052 "period_us": 100000, 00:19:16.052 "enable": false 00:19:16.052 } 00:19:16.052 }, 00:19:16.052 { 00:19:16.052 "method": "bdev_enable_histogram", 00:19:16.052 "params": { 00:19:16.052 "name": "nvme0n1", 00:19:16.052 "enable": true 00:19:16.052 } 00:19:16.052 }, 00:19:16.052 { 00:19:16.052 "method": "bdev_wait_for_examine" 00:19:16.052 } 00:19:16.052 ] 00:19:16.052 }, 00:19:16.052 { 00:19:16.052 "subsystem": "nbd", 00:19:16.052 "config": [] 00:19:16.052 } 00:19:16.052 ] 00:19:16.052 }' 00:19:16.052 18:44:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.052 [2024-07-15 18:44:32.550037] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:19:16.052 [2024-07-15 18:44:32.550083] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127179 ] 00:19:16.052 EAL: No free 2048 kB hugepages reported on node 1 00:19:16.052 [2024-07-15 18:44:32.604834] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.052 [2024-07-15 18:44:32.683101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.311 [2024-07-15 18:44:32.833471] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:16.879 18:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:16.879 18:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:16.879 18:44:33 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:16.879 18:44:33 nvmf_tcp.nvmf_tls -- 
target/tls.sh@275 -- # jq -r '.[].name' 00:19:16.879 18:44:33 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.879 18:44:33 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:17.137 Running I/O for 1 seconds... 00:19:18.107 00:19:18.107 Latency(us) 00:19:18.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.107 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:18.107 Verification LBA range: start 0x0 length 0x2000 00:19:18.107 nvme0n1 : 1.02 5090.36 19.88 0.00 0.00 24920.14 5584.81 72944.42 00:19:18.107 =================================================================================================================== 00:19:18.107 Total : 5090.36 19.88 0.00 0.00 24920.14 5584.81 72944.42 00:19:18.107 0 00:19:18.107 18:44:34 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:19:18.107 18:44:34 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:19:18.107 18:44:34 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:18.107 18:44:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:19:18.107 18:44:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:19:18.107 18:44:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:18.107 18:44:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:18.107 18:44:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:18.107 18:44:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:18.107 18:44:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:18.107 18:44:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:18.107 nvmf_trace.0 00:19:18.107 18:44:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:19:18.107 18:44:34 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1127179 00:19:18.107 18:44:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1127179 ']' 00:19:18.107 18:44:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1127179 00:19:18.107 18:44:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:18.107 18:44:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:18.107 18:44:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1127179 00:19:18.107 18:44:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:18.107 18:44:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:18.107 18:44:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1127179' 00:19:18.107 killing process with pid 1127179 00:19:18.107 18:44:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1127179 00:19:18.107 Received shutdown signal, test time was about 1.000000 seconds 00:19:18.107 00:19:18.107 Latency(us) 00:19:18.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.107 =================================================================================================================== 00:19:18.107 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:18.107 18:44:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1127179 00:19:18.367 18:44:34 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:18.367 18:44:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:18.367 18:44:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:19:18.367 18:44:34 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:18.367 18:44:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:19:18.367 18:44:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:18.367 18:44:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:18.367 rmmod nvme_tcp 00:19:18.367 rmmod nvme_fabrics 00:19:18.367 rmmod nvme_keyring 00:19:18.367 18:44:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:18.367 18:44:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:19:18.367 18:44:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:19:18.367 18:44:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1127103 ']' 00:19:18.367 18:44:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1127103 00:19:18.367 18:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1127103 ']' 00:19:18.367 18:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1127103 00:19:18.367 18:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:18.367 18:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:18.367 18:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1127103 00:19:18.367 18:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:18.367 18:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:18.367 18:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1127103' 00:19:18.367 killing process with pid 1127103 00:19:18.367 18:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1127103 00:19:18.367 18:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1127103 00:19:18.626 18:44:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:18.626 18:44:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == 
\t\c\p ]] 00:19:18.626 18:44:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:18.626 18:44:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:18.626 18:44:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:18.626 18:44:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.626 18:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:18.626 18:44:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.163 18:44:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:21.163 18:44:37 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.y7W1Uaasc8 /tmp/tmp.uDP8jNS9iK /tmp/tmp.rVR2BEBXil 00:19:21.163 00:19:21.163 real 1m23.572s 00:19:21.163 user 2m10.012s 00:19:21.163 sys 0m27.727s 00:19:21.163 18:44:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:21.163 18:44:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.163 ************************************ 00:19:21.163 END TEST nvmf_tls 00:19:21.163 ************************************ 00:19:21.163 18:44:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:21.163 18:44:37 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:21.163 18:44:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:21.163 18:44:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:21.163 18:44:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:21.163 ************************************ 00:19:21.163 START TEST nvmf_fips 00:19:21.163 ************************************ 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:21.163 * Looking for test storage... 00:19:21.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:19:21.163 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:19:21.164 Error setting digest 00:19:21.164 00F20A96F67F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:21.164 00F20A96F67F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # 
xtrace_disable 00:19:21.164 18:44:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:26.438 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:26.438 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:26.438 Found net devices under 0000:86:00.0: cvl_0_0 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:26.438 Found net devices under 0000:86:00.1: cvl_0_1 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:26.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:26.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:19:26.438 00:19:26.438 --- 10.0.0.2 ping statistics --- 00:19:26.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.438 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:26.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:26.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:19:26.438 00:19:26.438 --- 10.0.0.1 ping statistics --- 00:19:26.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.438 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1131132 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1131132 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1131132 ']' 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.438 18:44:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:26.439 18:44:42 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.439 18:44:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:26.439 18:44:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:26.439 18:44:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:26.439 [2024-07-15 18:44:42.848420] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:19:26.439 [2024-07-15 18:44:42.848465] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:26.439 EAL: No free 2048 kB hugepages reported on node 1 00:19:26.439 [2024-07-15 18:44:42.905043] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.439 [2024-07-15 18:44:42.977971] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:26.439 [2024-07-15 18:44:42.978008] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:26.439 [2024-07-15 18:44:42.978016] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:26.439 [2024-07-15 18:44:42.978026] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:26.439 [2024-07-15 18:44:42.978031] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:26.439 [2024-07-15 18:44:42.978065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.005 18:44:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:27.005 18:44:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:27.005 18:44:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:27.005 18:44:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:27.005 18:44:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:27.005 18:44:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:27.005 18:44:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:19:27.005 18:44:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:27.005 18:44:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:27.005 18:44:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:27.005 18:44:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:27.005 18:44:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:27.005 18:44:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:27.005 18:44:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:27.264 [2024-07-15 18:44:43.813815] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:27.264 [2024-07-15 18:44:43.829812] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS 
support is considered experimental 00:19:27.264 [2024-07-15 18:44:43.829991] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:27.264 [2024-07-15 18:44:43.857981] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:27.264 malloc0 00:19:27.264 18:44:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:27.264 18:44:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1131269 00:19:27.264 18:44:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1131269 /var/tmp/bdevperf.sock 00:19:27.264 18:44:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:27.264 18:44:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1131269 ']' 00:19:27.264 18:44:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:27.264 18:44:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:27.264 18:44:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:27.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:27.264 18:44:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:27.264 18:44:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:27.264 [2024-07-15 18:44:43.934565] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:19:27.264 [2024-07-15 18:44:43.934612] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1131269 ] 00:19:27.264 EAL: No free 2048 kB hugepages reported on node 1 00:19:27.523 [2024-07-15 18:44:43.984766] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.523 [2024-07-15 18:44:44.062993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.090 18:44:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:28.090 18:44:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:28.090 18:44:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:28.348 [2024-07-15 18:44:44.884721] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:28.348 [2024-07-15 18:44:44.884789] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:28.348 TLSTESTn1 00:19:28.348 18:44:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:28.606 Running I/O for 10 seconds... 
00:19:38.630 00:19:38.630 Latency(us) 00:19:38.630 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.630 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:38.630 Verification LBA range: start 0x0 length 0x2000 00:19:38.630 TLSTESTn1 : 10.03 4470.05 17.46 0.00 0.00 28582.16 4843.97 59267.34 00:19:38.630 =================================================================================================================== 00:19:38.630 Total : 4470.05 17.46 0.00 0.00 28582.16 4843.97 59267.34 00:19:38.630 0 00:19:38.630 18:44:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:38.630 18:44:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:38.630 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:19:38.630 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:19:38.630 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:38.630 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:38.630 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:38.630 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:38.630 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:38.630 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:38.630 nvmf_trace.0 00:19:38.630 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:19:38.630 18:44:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1131269 00:19:38.630 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1131269 ']' 00:19:38.630 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill 
-0 1131269 00:19:38.630 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:19:38.630 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:38.630 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1131269 00:19:38.630 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:38.630 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:38.630 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1131269' 00:19:38.630 killing process with pid 1131269 00:19:38.630 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1131269 00:19:38.630 Received shutdown signal, test time was about 10.000000 seconds 00:19:38.630 00:19:38.630 Latency(us) 00:19:38.630 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.630 =================================================================================================================== 00:19:38.630 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:38.630 [2024-07-15 18:44:55.265573] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:38.630 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1131269 00:19:38.888 18:44:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:38.888 18:44:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:38.888 18:44:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:19:38.888 18:44:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:38.888 18:44:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:19:38.888 18:44:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:38.888 18:44:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:19:38.888 rmmod nvme_tcp 00:19:38.888 rmmod nvme_fabrics 00:19:38.888 rmmod nvme_keyring 00:19:38.888 18:44:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:38.888 18:44:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:19:38.888 18:44:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:19:38.888 18:44:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1131132 ']' 00:19:38.888 18:44:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1131132 00:19:38.888 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1131132 ']' 00:19:38.888 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1131132 00:19:38.888 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:19:38.888 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:38.888 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1131132 00:19:38.888 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:38.888 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:38.888 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1131132' 00:19:38.888 killing process with pid 1131132 00:19:38.888 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1131132 00:19:38.888 [2024-07-15 18:44:55.551640] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:38.888 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1131132 00:19:39.146 18:44:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:39.146 18:44:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:39.146 18:44:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
00:19:39.146 18:44:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:39.146 18:44:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:39.146 18:44:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.146 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:39.146 18:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.681 18:44:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:41.681 18:44:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:41.681 00:19:41.681 real 0m20.437s 00:19:41.681 user 0m22.474s 00:19:41.681 sys 0m8.765s 00:19:41.681 18:44:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:41.681 18:44:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:41.681 ************************************ 00:19:41.681 END TEST nvmf_fips 00:19:41.681 ************************************ 00:19:41.681 18:44:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:41.681 18:44:57 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:19:41.681 18:44:57 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:19:41.681 18:44:57 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:19:41.681 18:44:57 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:19:41.681 18:44:57 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:19:41.681 18:44:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:46.952 18:45:02 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:46.952 18:45:02 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:19:46.952 18:45:02 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:46.952 18:45:02 nvmf_tcp -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:19:46.952 18:45:02 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:46.952 18:45:02 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:46.952 18:45:02 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:46.952 18:45:02 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:19:46.952 18:45:02 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:46.952 18:45:02 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:19:46.952 18:45:02 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:19:46.952 18:45:02 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:19:46.952 18:45:02 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:19:46.952 18:45:02 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:19:46.952 18:45:02 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:19:46.952 18:45:02 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:46.952 18:45:02 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:46.952 18:45:02 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:46.952 18:45:02 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:46.952 18:45:02 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:46.952 18:45:02 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:46.952 18:45:02 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:46.952 18:45:02 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:46.952 18:45:02 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:46.952 18:45:02 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:46.952 18:45:02 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:46.952 18:45:02 nvmf_tcp -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:19:46.952 18:45:02 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:46.952 18:45:02 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:46.953 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:46.953 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:46.953 Found net devices under 0000:86:00.0: cvl_0_0 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:46.953 Found net devices under 0000:86:00.1: cvl_0_1 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:19:46.953 18:45:02 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:46.953 18:45:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:46.953 18:45:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:46.953 18:45:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:46.953 ************************************ 00:19:46.953 START TEST nvmf_perf_adq 00:19:46.953 ************************************ 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:46.953 * Looking for test storage... 00:19:46.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:46.953 18:45:02 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:46.953 18:45:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- 
# x722=() 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
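The trace above buckets supported NICs by vendor:device ID (Intel E810 `0x1592`/`0x159b`, X722 `0x37d2`, and a list of Mellanox ConnectX IDs) before picking which family to test. A reduced sketch of that classification follows; the `pci_bus_cache` table here is a hypothetical inline stand-in for the lookup the real `nvmf/common.sh` builds by scanning the PCI bus.

```shell
#!/usr/bin/env bash
# Sketch of the device bucketing seen at nvmf/common.sh@296-330.
# In the real script pci_bus_cache maps "vendor:device" -> PCI addresses;
# the entry below is a hypothetical stand-in matching the two E810 ports
# reported in this log (0x8086 - 0x159b).
intel=0x8086 mellanox=0x15b3
declare -A pci_bus_cache=(
  ["$intel:0x159b"]="0000:86:00.0 0000:86:00.1"
)

e810=() x722=() mlx=()
# Unset keys expand to nothing, so empty buckets stay empty.
e810+=(${pci_bus_cache["$intel:0x1592"]})
e810+=(${pci_bus_cache["$intel:0x159b"]})
x722+=(${pci_bus_cache["$intel:0x37d2"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})

# With an e810-family NIC and a tcp transport, only the e810 list survives,
# mirroring pci_devs=("${e810[@]}") in the trace.
pci_devs=("${e810[@]}")
echo "${#pci_devs[@]} supported NVMe-oF NICs: ${pci_devs[*]}"
```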
00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:51.180 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:51.181 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:51.181 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:51.181 Found net devices under 0000:86:00.0: cvl_0_0 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:51.181 Found net devices under 0000:86:00.1: cvl_0_1 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:19:51.181 18:45:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:53.084 18:45:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:54.988 18:45:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
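The `gather_supported_nvmf_pci_devs` loop traced above resolves each PCI address to its kernel net device by globbing `/sys/bus/pci/devices/$pci/net/` and then stripping the path prefix. A minimal runnable sketch of that mapping; the glob result is fabricated here (the real script reads sysfs) and the PCI address and interface name are taken from the log.

```shell
#!/usr/bin/env bash
# Sketch of the pci -> net-device mapping at nvmf/common.sh@383-399.
pci="0000:86:00.0"

# The real script does: pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
# Here we stand in the expected glob result so the sketch runs anywhere.
pci_net_devs=("/sys/bus/pci/devices/$pci/net/cvl_0_0")

# Strip the longest prefix ending in '/' from every element, keeping only
# the interface name, exactly as the "${pci_net_devs[@]##*/}" expansion
# in the trace does.
pci_net_devs=("${pci_net_devs[@]##*/}")

echo "Found net devices under $pci: ${pci_net_devs[*]}"
```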
00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:00.263 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:00.264 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 
(0x8086 - 0x159b)' 00:20:00.264 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:00.264 Found net devices under 0000:86:00.0: cvl_0_0 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:00.264 Found net devices under 0000:86:00.1: cvl_0_1 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:00.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:00.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:20:00.264 00:20:00.264 --- 10.0.0.2 ping statistics --- 00:20:00.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.264 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:00.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:00.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:20:00.264 00:20:00.264 --- 10.0.0.1 ping statistics --- 00:20:00.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.264 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1141575 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1141575 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 
-- # '[' -z 1141575 ']' 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.264 18:45:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.264 [2024-07-15 18:45:16.569369] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:20:00.264 [2024-07-15 18:45:16.569412] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.264 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.264 [2024-07-15 18:45:16.626555] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:00.264 [2024-07-15 18:45:16.707869] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.264 [2024-07-15 18:45:16.707905] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.264 [2024-07-15 18:45:16.707912] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.264 [2024-07-15 18:45:16.707918] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.264 [2024-07-15 18:45:16.707922] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
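The `nvmf_tcp_init` sequence traced earlier moves one NIC port into a private network namespace and addresses the two ends as 10.0.0.1/10.0.0.2, so initiator and target can exchange real TCP traffic on a single host. The steps can be sketched as the dry run below; `run` only echoes each command, since executing them for real requires root and the `cvl_0_0`/`cvl_0_1` interfaces from this log.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns setup at nvmf/common.sh@248-267.
# run() echoes instead of executing, so this is safe without root.
run() { echo "+ $*"; }

target_if=cvl_0_0   # moved into the namespace, addressed 10.0.0.2
init_if=cvl_0_1     # stays in the root namespace, addressed 10.0.0.1
ns=cvl_0_0_ns_spdk

run ip netns add "$ns"
run ip link set "$target_if" netns "$ns"
run ip addr add 10.0.0.1/24 dev "$init_if"
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
run ip link set "$init_if" up
run ip netns exec "$ns" ip link set "$target_if" up
# Allow NVMe/TCP traffic to port 4420 through the host firewall.
run iptables -I INPUT 1 -i "$init_if" -p tcp --dport 4420 -j ACCEPT
# Verify the path in both directions before launching nvmf_tgt in the netns.
run ping -c 1 10.0.0.2
run ip netns exec "$ns" ping -c 1 10.0.0.1
```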
00:20:00.264 [2024-07-15 18:45:16.707954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.264 [2024-07-15 18:45:16.708050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.264 [2024-07-15 18:45:16.708128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:00.264 [2024-07-15 18:45:16.708129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.835 18:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:00.835 18:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:20:00.835 18:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:00.835 18:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:00.835 18:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.835 18:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.835 18:45:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:20:00.835 18:45:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:00.835 18:45:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:00.835 18:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.835 18:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.835 18:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.835 18:45:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:00.835 18:45:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:00.835 18:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 
00:20:00.835 18:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.835 18:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.835 18:45:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:00.835 18:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.835 18:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.094 18:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.094 18:45:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:01.094 18:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.094 18:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.094 [2024-07-15 18:45:17.558399] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.094 18:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.094 18:45:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:01.094 18:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.094 18:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.094 Malloc1 00:20:01.094 18:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.094 18:45:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:01.094 18:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.094 18:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.094 18:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.094 
18:45:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:01.094 18:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.094 18:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.094 18:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.094 18:45:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:01.094 18:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.094 18:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.094 [2024-07-15 18:45:17.610234] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.094 18:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.094 18:45:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1141641 00:20:01.094 18:45:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:20:01.094 18:45:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:01.094 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.994 18:45:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:02.994 18:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.994 18:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:02.994 18:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.994 18:45:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:20:02.994 
"tick_rate": 2300000000, 00:20:02.994 "poll_groups": [ 00:20:02.994 { 00:20:02.994 "name": "nvmf_tgt_poll_group_000", 00:20:02.994 "admin_qpairs": 1, 00:20:02.994 "io_qpairs": 1, 00:20:02.994 "current_admin_qpairs": 1, 00:20:02.994 "current_io_qpairs": 1, 00:20:02.995 "pending_bdev_io": 0, 00:20:02.995 "completed_nvme_io": 20405, 00:20:02.995 "transports": [ 00:20:02.995 { 00:20:02.995 "trtype": "TCP" 00:20:02.995 } 00:20:02.995 ] 00:20:02.995 }, 00:20:02.995 { 00:20:02.995 "name": "nvmf_tgt_poll_group_001", 00:20:02.995 "admin_qpairs": 0, 00:20:02.995 "io_qpairs": 1, 00:20:02.995 "current_admin_qpairs": 0, 00:20:02.995 "current_io_qpairs": 1, 00:20:02.995 "pending_bdev_io": 0, 00:20:02.995 "completed_nvme_io": 20583, 00:20:02.995 "transports": [ 00:20:02.995 { 00:20:02.995 "trtype": "TCP" 00:20:02.995 } 00:20:02.995 ] 00:20:02.995 }, 00:20:02.995 { 00:20:02.995 "name": "nvmf_tgt_poll_group_002", 00:20:02.995 "admin_qpairs": 0, 00:20:02.995 "io_qpairs": 1, 00:20:02.995 "current_admin_qpairs": 0, 00:20:02.995 "current_io_qpairs": 1, 00:20:02.995 "pending_bdev_io": 0, 00:20:02.995 "completed_nvme_io": 20538, 00:20:02.995 "transports": [ 00:20:02.995 { 00:20:02.995 "trtype": "TCP" 00:20:02.995 } 00:20:02.995 ] 00:20:02.995 }, 00:20:02.995 { 00:20:02.995 "name": "nvmf_tgt_poll_group_003", 00:20:02.995 "admin_qpairs": 0, 00:20:02.995 "io_qpairs": 1, 00:20:02.995 "current_admin_qpairs": 0, 00:20:02.995 "current_io_qpairs": 1, 00:20:02.995 "pending_bdev_io": 0, 00:20:02.995 "completed_nvme_io": 20455, 00:20:02.995 "transports": [ 00:20:02.995 { 00:20:02.995 "trtype": "TCP" 00:20:02.995 } 00:20:02.995 ] 00:20:02.995 } 00:20:02.995 ] 00:20:02.995 }' 00:20:02.995 18:45:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:02.995 18:45:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:20:02.995 18:45:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:20:02.995 18:45:19 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:20:02.995 18:45:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1141641 00:20:11.114 Initializing NVMe Controllers 00:20:11.115 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:11.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:11.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:11.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:11.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:11.115 Initialization complete. Launching workers. 00:20:11.115 ======================================================== 00:20:11.115 Latency(us) 00:20:11.115 Device Information : IOPS MiB/s Average min max 00:20:11.115 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10764.30 42.05 5946.04 2027.62 9253.18 00:20:11.115 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10896.20 42.56 5873.01 2006.47 10446.58 00:20:11.115 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10847.60 42.37 5899.64 1771.39 10277.23 00:20:11.115 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10789.60 42.15 5931.53 2054.13 10732.77 00:20:11.115 ======================================================== 00:20:11.115 Total : 43297.69 169.13 5912.42 1771.39 10732.77 00:20:11.115 00:20:11.115 18:45:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:20:11.115 18:45:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:11.115 18:45:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:11.115 18:45:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:11.115 18:45:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:11.115 18:45:27 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:11.115 18:45:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:11.115 rmmod nvme_tcp 00:20:11.115 rmmod nvme_fabrics 00:20:11.115 rmmod nvme_keyring 00:20:11.115 18:45:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:11.372 18:45:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:11.372 18:45:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:11.372 18:45:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1141575 ']' 00:20:11.372 18:45:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1141575 00:20:11.372 18:45:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1141575 ']' 00:20:11.372 18:45:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1141575 00:20:11.372 18:45:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:20:11.372 18:45:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:11.372 18:45:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1141575 00:20:11.372 18:45:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:11.372 18:45:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:11.372 18:45:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1141575' 00:20:11.372 killing process with pid 1141575 00:20:11.372 18:45:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1141575 00:20:11.372 18:45:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1141575 00:20:11.372 18:45:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:11.372 18:45:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:11.372 18:45:28 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:11.372 18:45:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:11.372 18:45:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:11.372 18:45:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.372 18:45:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:11.372 18:45:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.901 18:45:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:13.901 18:45:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:20:13.901 18:45:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:14.837 18:45:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:16.740 18:45:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:22.008 18:45:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:20:22.008 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:22.008 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:22.008 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:22.008 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:22.008 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:22.008 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ 
phy != virt ]] 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:22.009 18:45:38 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:22.009 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:22.009 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:22.009 Found net devices under 0000:86:00.0: cvl_0_0 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:22.009 Found net devices under 0000:86:00.1: cvl_0_1 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:22.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:22.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:20:22.009 00:20:22.009 --- 10.0.0.2 ping statistics --- 00:20:22.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.009 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:20:22.009 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:22.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:22.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:20:22.009 00:20:22.009 --- 10.0.0.1 ping statistics --- 00:20:22.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.009 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:20:22.010 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:22.010 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:22.010 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:22.010 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:22.010 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:22.010 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:22.010 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:22.010 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:22.010 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:22.010 18:45:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:20:22.010 18:45:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:22.010 18:45:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 
00:20:22.010 18:45:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:22.010 net.core.busy_poll = 1 00:20:22.010 18:45:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:22.010 net.core.busy_read = 1 00:20:22.010 18:45:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:22.010 18:45:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:22.010 18:45:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:22.313 18:45:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:22.313 18:45:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:22.313 18:45:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:22.313 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:22.313 18:45:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:22.313 18:45:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.313 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1145410 00:20:22.313 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1145410 00:20:22.313 18:45:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1145410 ']' 00:20:22.313 18:45:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.313 18:45:38 
nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:22.313 18:45:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.313 18:45:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:22.313 18:45:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.313 18:45:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:22.313 [2024-07-15 18:45:38.819857] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:20:22.313 [2024-07-15 18:45:38.819901] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.313 EAL: No free 2048 kB hugepages reported on node 1 00:20:22.313 [2024-07-15 18:45:38.878196] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:22.313 [2024-07-15 18:45:38.959043] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.313 [2024-07-15 18:45:38.959079] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.313 [2024-07-15 18:45:38.959086] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.313 [2024-07-15 18:45:38.959092] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:22.313 [2024-07-15 18:45:38.959097] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:22.313 [2024-07-15 18:45:38.959140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.313 [2024-07-15 18:45:38.959157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.313 [2024-07-15 18:45:38.959251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:22.313 [2024-07-15 18:45:38.959253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 
--enable-zerocopy-send-server -i posix 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.280 [2024-07-15 18:45:39.816806] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.280 Malloc1 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.280 [2024-07-15 18:45:39.864501] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1145678 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:20:23.280 18:45:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:23.280 EAL: No free 2048 kB hugepages reported on node 1 00:20:25.186 18:45:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:20:25.186 18:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.186 18:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.445 18:45:41 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.445 18:45:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:20:25.445 "tick_rate": 2300000000, 00:20:25.445 "poll_groups": [ 00:20:25.445 { 00:20:25.445 "name": "nvmf_tgt_poll_group_000", 00:20:25.445 "admin_qpairs": 1, 00:20:25.445 "io_qpairs": 3, 00:20:25.445 "current_admin_qpairs": 1, 00:20:25.445 "current_io_qpairs": 3, 00:20:25.445 "pending_bdev_io": 0, 00:20:25.445 "completed_nvme_io": 29354, 00:20:25.445 "transports": [ 00:20:25.445 { 00:20:25.445 "trtype": "TCP" 00:20:25.445 } 00:20:25.445 ] 00:20:25.445 }, 00:20:25.445 { 00:20:25.445 "name": "nvmf_tgt_poll_group_001", 00:20:25.445 "admin_qpairs": 0, 00:20:25.445 "io_qpairs": 1, 00:20:25.445 "current_admin_qpairs": 0, 00:20:25.445 "current_io_qpairs": 1, 00:20:25.445 "pending_bdev_io": 0, 00:20:25.445 "completed_nvme_io": 28137, 00:20:25.445 "transports": [ 00:20:25.445 { 00:20:25.445 "trtype": "TCP" 00:20:25.445 } 00:20:25.445 ] 00:20:25.445 }, 00:20:25.445 { 00:20:25.445 "name": "nvmf_tgt_poll_group_002", 00:20:25.445 "admin_qpairs": 0, 00:20:25.445 "io_qpairs": 0, 00:20:25.445 "current_admin_qpairs": 0, 00:20:25.445 "current_io_qpairs": 0, 00:20:25.445 "pending_bdev_io": 0, 00:20:25.445 "completed_nvme_io": 0, 00:20:25.445 "transports": [ 00:20:25.445 { 00:20:25.445 "trtype": "TCP" 00:20:25.445 } 00:20:25.445 ] 00:20:25.445 }, 00:20:25.445 { 00:20:25.445 "name": "nvmf_tgt_poll_group_003", 00:20:25.445 "admin_qpairs": 0, 00:20:25.445 "io_qpairs": 0, 00:20:25.445 "current_admin_qpairs": 0, 00:20:25.445 "current_io_qpairs": 0, 00:20:25.445 "pending_bdev_io": 0, 00:20:25.445 "completed_nvme_io": 0, 00:20:25.445 "transports": [ 00:20:25.445 { 00:20:25.445 "trtype": "TCP" 00:20:25.445 } 00:20:25.445 ] 00:20:25.445 } 00:20:25.445 ] 00:20:25.445 }' 00:20:25.445 18:45:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:25.445 18:45:41 nvmf_tcp.nvmf_perf_adq 
-- target/perf_adq.sh@100 -- # wc -l 00:20:25.445 18:45:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:20:25.445 18:45:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:20:25.445 18:45:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1145678 00:20:33.559 Initializing NVMe Controllers 00:20:33.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:33.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:33.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:33.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:33.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:33.560 Initialization complete. Launching workers. 00:20:33.560 ======================================================== 00:20:33.560 Latency(us) 00:20:33.560 Device Information : IOPS MiB/s Average min max 00:20:33.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5309.05 20.74 12099.17 1462.10 58458.97 00:20:33.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4917.86 19.21 13021.43 1712.09 58870.40 00:20:33.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5630.15 21.99 11369.19 1471.25 59742.21 00:20:33.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 14771.37 57.70 4342.72 1097.25 45263.18 00:20:33.560 ======================================================== 00:20:33.560 Total : 30628.42 119.64 8372.31 1097.25 59742.21 00:20:33.560 00:20:33.560 18:45:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:20:33.560 18:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:33.560 18:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:33.560 18:45:50 nvmf_tcp.nvmf_perf_adq 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:33.560 18:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:33.560 18:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:33.560 18:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:33.560 rmmod nvme_tcp 00:20:33.560 rmmod nvme_fabrics 00:20:33.560 rmmod nvme_keyring 00:20:33.560 18:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:33.560 18:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:33.560 18:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:33.560 18:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1145410 ']' 00:20:33.560 18:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1145410 00:20:33.560 18:45:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1145410 ']' 00:20:33.560 18:45:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1145410 00:20:33.560 18:45:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:20:33.560 18:45:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:33.560 18:45:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1145410 00:20:33.560 18:45:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:33.560 18:45:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:33.560 18:45:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1145410' 00:20:33.560 killing process with pid 1145410 00:20:33.560 18:45:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1145410 00:20:33.560 18:45:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1145410 00:20:33.819 18:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 
-- # '[' '' == iso ']' 00:20:33.819 18:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:33.819 18:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:33.819 18:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:33.819 18:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:33.819 18:45:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.819 18:45:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:33.819 18:45:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.110 18:45:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:37.110 18:45:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:20:37.110 00:20:37.110 real 0m50.597s 00:20:37.110 user 2m49.326s 00:20:37.110 sys 0m9.099s 00:20:37.110 18:45:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:37.110 18:45:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.110 ************************************ 00:20:37.110 END TEST nvmf_perf_adq 00:20:37.110 ************************************ 00:20:37.110 18:45:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:37.110 18:45:53 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:37.110 18:45:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:37.110 18:45:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:37.110 18:45:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:37.110 ************************************ 00:20:37.110 START TEST nvmf_shutdown 00:20:37.110 ************************************ 
00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:37.110 * Looking for test storage... 00:20:37.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:37.110 18:45:53 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:37.110 ************************************ 00:20:37.110 START TEST nvmf_shutdown_tc1 00:20:37.110 ************************************ 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:37.110 18:45:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:37.110 18:45:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@298 -- # mlx=() 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:42.389 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:42.389 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:42.389 Found net devices under 0000:86:00.0: cvl_0_0 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.389 18:45:58 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:42.389 Found net devices under 0000:86:00.1: cvl_0_1 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:42.389 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:42.390 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:42.390 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:42.390 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:42.390 
18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:42.390 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:42.390 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:42.390 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:42.390 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:42.390 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:42.390 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:42.390 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:42.390 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:42.390 18:45:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:42.390 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:42.390 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:42.390 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:42.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:42.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:20:42.390 00:20:42.390 --- 10.0.0.2 ping statistics --- 00:20:42.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.390 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:20:42.390 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:42.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:42.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:20:42.390 00:20:42.390 --- 10.0.0.1 ping statistics --- 00:20:42.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.390 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:20:42.390 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:42.390 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:20:42.390 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:42.390 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:42.390 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:42.390 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:42.390 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:42.390 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:42.390 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:42.649 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:42.649 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:20:42.649 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:42.649 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:42.649 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1151114 00:20:42.649 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1151114 00:20:42.649 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:42.649 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1151114 ']' 00:20:42.649 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.649 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:42.649 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.649 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:42.650 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:42.650 [2024-07-15 18:45:59.161333] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:20:42.650 [2024-07-15 18:45:59.161384] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.650 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.650 [2024-07-15 18:45:59.219423] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:42.650 [2024-07-15 18:45:59.299463] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.650 [2024-07-15 18:45:59.299496] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.650 [2024-07-15 18:45:59.299503] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.650 [2024-07-15 18:45:59.299509] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.650 [2024-07-15 18:45:59.299514] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:42.650 [2024-07-15 18:45:59.299629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.650 [2024-07-15 18:45:59.299711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:42.650 [2024-07-15 18:45:59.299819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.650 [2024-07-15 18:45:59.299820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:43.590 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:43.590 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:20:43.590 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:43.590 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:43.590 18:45:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.590 [2024-07-15 18:46:00.022123] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:43.590 
18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:43.590 18:46:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.590 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.590 Malloc1 00:20:43.590 [2024-07-15 18:46:00.117832] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.590 Malloc2 00:20:43.590 Malloc3 00:20:43.590 Malloc4 00:20:43.590 Malloc5 00:20:43.850 Malloc6 00:20:43.850 Malloc7 00:20:43.850 Malloc8 00:20:43.850 Malloc9 00:20:43.850 Malloc10 00:20:43.850 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.850 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:43.850 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:43.850 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.850 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1151391 00:20:43.850 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1151391 
/var/tmp/bdevperf.sock 00:20:43.850 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1151391 ']' 00:20:43.850 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.850 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:43.850 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:43.850 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:43.850 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:43.850 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:43.850 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:43.850 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:43.850 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.850 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:43.850 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:43.850 { 00:20:43.850 "params": { 00:20:43.850 "name": "Nvme$subsystem", 00:20:43.850 "trtype": "$TEST_TRANSPORT", 00:20:43.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.850 "adrfam": "ipv4", 00:20:43.850 "trsvcid": "$NVMF_PORT", 00:20:43.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.850 "hdgst": ${hdgst:-false}, 00:20:43.850 "ddgst": ${ddgst:-false} 00:20:43.850 }, 00:20:43.850 "method": "bdev_nvme_attach_controller" 00:20:43.850 } 00:20:43.850 EOF 00:20:43.850 )") 00:20:44.110 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:44.110 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:44.110 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:44.110 { 00:20:44.110 "params": { 00:20:44.110 "name": "Nvme$subsystem", 00:20:44.110 "trtype": "$TEST_TRANSPORT", 00:20:44.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.110 "adrfam": "ipv4", 00:20:44.110 "trsvcid": "$NVMF_PORT", 00:20:44.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.110 "hdgst": ${hdgst:-false}, 00:20:44.110 "ddgst": ${ddgst:-false} 00:20:44.110 
}, 00:20:44.110 "method": "bdev_nvme_attach_controller" 00:20:44.110 } 00:20:44.110 EOF 00:20:44.110 )") 00:20:44.110 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:44.110 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:44.110 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:44.110 { 00:20:44.110 "params": { 00:20:44.110 "name": "Nvme$subsystem", 00:20:44.110 "trtype": "$TEST_TRANSPORT", 00:20:44.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.111 "adrfam": "ipv4", 00:20:44.111 "trsvcid": "$NVMF_PORT", 00:20:44.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.111 "hdgst": ${hdgst:-false}, 00:20:44.111 "ddgst": ${ddgst:-false} 00:20:44.111 }, 00:20:44.111 "method": "bdev_nvme_attach_controller" 00:20:44.111 } 00:20:44.111 EOF 00:20:44.111 )") 00:20:44.111 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:44.111 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:44.111 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:44.111 { 00:20:44.111 "params": { 00:20:44.111 "name": "Nvme$subsystem", 00:20:44.111 "trtype": "$TEST_TRANSPORT", 00:20:44.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.111 "adrfam": "ipv4", 00:20:44.111 "trsvcid": "$NVMF_PORT", 00:20:44.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.111 "hdgst": ${hdgst:-false}, 00:20:44.111 "ddgst": ${ddgst:-false} 00:20:44.111 }, 00:20:44.111 "method": "bdev_nvme_attach_controller" 00:20:44.111 } 00:20:44.111 EOF 00:20:44.111 )") 00:20:44.111 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:44.111 18:46:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:44.111 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:44.111 { 00:20:44.111 "params": { 00:20:44.111 "name": "Nvme$subsystem", 00:20:44.111 "trtype": "$TEST_TRANSPORT", 00:20:44.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.111 "adrfam": "ipv4", 00:20:44.111 "trsvcid": "$NVMF_PORT", 00:20:44.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.111 "hdgst": ${hdgst:-false}, 00:20:44.111 "ddgst": ${ddgst:-false} 00:20:44.111 }, 00:20:44.111 "method": "bdev_nvme_attach_controller" 00:20:44.111 } 00:20:44.111 EOF 00:20:44.111 )") 00:20:44.111 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:44.111 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:44.111 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:44.111 { 00:20:44.111 "params": { 00:20:44.111 "name": "Nvme$subsystem", 00:20:44.111 "trtype": "$TEST_TRANSPORT", 00:20:44.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.111 "adrfam": "ipv4", 00:20:44.111 "trsvcid": "$NVMF_PORT", 00:20:44.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.111 "hdgst": ${hdgst:-false}, 00:20:44.111 "ddgst": ${ddgst:-false} 00:20:44.111 }, 00:20:44.111 "method": "bdev_nvme_attach_controller" 00:20:44.111 } 00:20:44.111 EOF 00:20:44.111 )") 00:20:44.111 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:44.111 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:44.111 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:44.111 { 00:20:44.111 
"params": { 00:20:44.111 "name": "Nvme$subsystem", 00:20:44.111 "trtype": "$TEST_TRANSPORT", 00:20:44.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.111 "adrfam": "ipv4", 00:20:44.111 "trsvcid": "$NVMF_PORT", 00:20:44.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.111 "hdgst": ${hdgst:-false}, 00:20:44.111 "ddgst": ${ddgst:-false} 00:20:44.111 }, 00:20:44.111 "method": "bdev_nvme_attach_controller" 00:20:44.111 } 00:20:44.111 EOF 00:20:44.111 )") 00:20:44.111 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:44.111 [2024-07-15 18:46:00.598914] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:20:44.111 [2024-07-15 18:46:00.598963] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:44.111 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:44.111 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:44.111 { 00:20:44.111 "params": { 00:20:44.111 "name": "Nvme$subsystem", 00:20:44.111 "trtype": "$TEST_TRANSPORT", 00:20:44.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.111 "adrfam": "ipv4", 00:20:44.111 "trsvcid": "$NVMF_PORT", 00:20:44.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.111 "hdgst": ${hdgst:-false}, 00:20:44.111 "ddgst": ${ddgst:-false} 00:20:44.111 }, 00:20:44.111 "method": "bdev_nvme_attach_controller" 00:20:44.111 } 00:20:44.111 EOF 00:20:44.111 )") 00:20:44.111 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:44.111 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:20:44.111 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:44.111 { 00:20:44.111 "params": { 00:20:44.111 "name": "Nvme$subsystem", 00:20:44.111 "trtype": "$TEST_TRANSPORT", 00:20:44.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.111 "adrfam": "ipv4", 00:20:44.111 "trsvcid": "$NVMF_PORT", 00:20:44.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.111 "hdgst": ${hdgst:-false}, 00:20:44.111 "ddgst": ${ddgst:-false} 00:20:44.111 }, 00:20:44.111 "method": "bdev_nvme_attach_controller" 00:20:44.111 } 00:20:44.111 EOF 00:20:44.111 )") 00:20:44.111 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:44.111 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:44.111 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:44.111 { 00:20:44.111 "params": { 00:20:44.111 "name": "Nvme$subsystem", 00:20:44.111 "trtype": "$TEST_TRANSPORT", 00:20:44.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.111 "adrfam": "ipv4", 00:20:44.111 "trsvcid": "$NVMF_PORT", 00:20:44.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.111 "hdgst": ${hdgst:-false}, 00:20:44.111 "ddgst": ${ddgst:-false} 00:20:44.111 }, 00:20:44.111 "method": "bdev_nvme_attach_controller" 00:20:44.111 } 00:20:44.111 EOF 00:20:44.111 )") 00:20:44.111 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:44.111 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:20:44.111 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.111 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:44.111 18:46:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:44.111 "params": { 00:20:44.111 "name": "Nvme1", 00:20:44.111 "trtype": "tcp", 00:20:44.111 "traddr": "10.0.0.2", 00:20:44.111 "adrfam": "ipv4", 00:20:44.111 "trsvcid": "4420", 00:20:44.111 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.111 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:44.111 "hdgst": false, 00:20:44.111 "ddgst": false 00:20:44.111 }, 00:20:44.111 "method": "bdev_nvme_attach_controller" 00:20:44.111 },{ 00:20:44.111 "params": { 00:20:44.111 "name": "Nvme2", 00:20:44.111 "trtype": "tcp", 00:20:44.111 "traddr": "10.0.0.2", 00:20:44.111 "adrfam": "ipv4", 00:20:44.111 "trsvcid": "4420", 00:20:44.111 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:44.111 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:44.111 "hdgst": false, 00:20:44.111 "ddgst": false 00:20:44.111 }, 00:20:44.111 "method": "bdev_nvme_attach_controller" 00:20:44.111 },{ 00:20:44.111 "params": { 00:20:44.111 "name": "Nvme3", 00:20:44.111 "trtype": "tcp", 00:20:44.111 "traddr": "10.0.0.2", 00:20:44.111 "adrfam": "ipv4", 00:20:44.111 "trsvcid": "4420", 00:20:44.111 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:44.111 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:44.111 "hdgst": false, 00:20:44.111 "ddgst": false 00:20:44.111 }, 00:20:44.111 "method": "bdev_nvme_attach_controller" 00:20:44.111 },{ 00:20:44.111 "params": { 00:20:44.111 "name": "Nvme4", 00:20:44.111 "trtype": "tcp", 00:20:44.111 "traddr": "10.0.0.2", 00:20:44.111 "adrfam": "ipv4", 00:20:44.111 "trsvcid": "4420", 00:20:44.111 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:44.111 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:44.111 "hdgst": false, 00:20:44.111 "ddgst": false 00:20:44.111 }, 00:20:44.111 "method": "bdev_nvme_attach_controller" 00:20:44.111 },{ 
00:20:44.111 "params": { 00:20:44.111 "name": "Nvme5", 00:20:44.111 "trtype": "tcp", 00:20:44.111 "traddr": "10.0.0.2", 00:20:44.111 "adrfam": "ipv4", 00:20:44.111 "trsvcid": "4420", 00:20:44.111 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:44.111 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:44.111 "hdgst": false, 00:20:44.111 "ddgst": false 00:20:44.111 }, 00:20:44.111 "method": "bdev_nvme_attach_controller" 00:20:44.111 },{ 00:20:44.111 "params": { 00:20:44.111 "name": "Nvme6", 00:20:44.111 "trtype": "tcp", 00:20:44.111 "traddr": "10.0.0.2", 00:20:44.111 "adrfam": "ipv4", 00:20:44.111 "trsvcid": "4420", 00:20:44.111 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:44.111 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:44.111 "hdgst": false, 00:20:44.111 "ddgst": false 00:20:44.111 }, 00:20:44.111 "method": "bdev_nvme_attach_controller" 00:20:44.111 },{ 00:20:44.111 "params": { 00:20:44.111 "name": "Nvme7", 00:20:44.111 "trtype": "tcp", 00:20:44.111 "traddr": "10.0.0.2", 00:20:44.111 "adrfam": "ipv4", 00:20:44.112 "trsvcid": "4420", 00:20:44.112 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:44.112 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:44.112 "hdgst": false, 00:20:44.112 "ddgst": false 00:20:44.112 }, 00:20:44.112 "method": "bdev_nvme_attach_controller" 00:20:44.112 },{ 00:20:44.112 "params": { 00:20:44.112 "name": "Nvme8", 00:20:44.112 "trtype": "tcp", 00:20:44.112 "traddr": "10.0.0.2", 00:20:44.112 "adrfam": "ipv4", 00:20:44.112 "trsvcid": "4420", 00:20:44.112 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:44.112 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:44.112 "hdgst": false, 00:20:44.112 "ddgst": false 00:20:44.112 }, 00:20:44.112 "method": "bdev_nvme_attach_controller" 00:20:44.112 },{ 00:20:44.112 "params": { 00:20:44.112 "name": "Nvme9", 00:20:44.112 "trtype": "tcp", 00:20:44.112 "traddr": "10.0.0.2", 00:20:44.112 "adrfam": "ipv4", 00:20:44.112 "trsvcid": "4420", 00:20:44.112 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:44.112 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:20:44.112 "hdgst": false, 00:20:44.112 "ddgst": false 00:20:44.112 }, 00:20:44.112 "method": "bdev_nvme_attach_controller" 00:20:44.112 },{ 00:20:44.112 "params": { 00:20:44.112 "name": "Nvme10", 00:20:44.112 "trtype": "tcp", 00:20:44.112 "traddr": "10.0.0.2", 00:20:44.112 "adrfam": "ipv4", 00:20:44.112 "trsvcid": "4420", 00:20:44.112 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:44.112 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:44.112 "hdgst": false, 00:20:44.112 "ddgst": false 00:20:44.112 }, 00:20:44.112 "method": "bdev_nvme_attach_controller" 00:20:44.112 }' 00:20:44.112 [2024-07-15 18:46:00.655881] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.112 [2024-07-15 18:46:00.730293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.488 18:46:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:45.488 18:46:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:20:45.488 18:46:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:45.488 18:46:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.488 18:46:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:45.488 18:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.488 18:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1151391 00:20:45.488 18:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:20:45.488 18:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:20:46.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1151391 Killed 
$rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1151114 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:46.498 { 00:20:46.498 "params": { 00:20:46.498 "name": "Nvme$subsystem", 00:20:46.498 "trtype": "$TEST_TRANSPORT", 00:20:46.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.498 "adrfam": "ipv4", 00:20:46.498 "trsvcid": "$NVMF_PORT", 00:20:46.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.498 "hdgst": ${hdgst:-false}, 00:20:46.498 "ddgst": ${ddgst:-false} 00:20:46.498 }, 00:20:46.498 "method": "bdev_nvme_attach_controller" 00:20:46.498 } 00:20:46.498 EOF 00:20:46.498 )") 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:46.498 { 00:20:46.498 "params": { 00:20:46.498 "name": "Nvme$subsystem", 
00:20:46.498 "trtype": "$TEST_TRANSPORT", 00:20:46.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.498 "adrfam": "ipv4", 00:20:46.498 "trsvcid": "$NVMF_PORT", 00:20:46.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.498 "hdgst": ${hdgst:-false}, 00:20:46.498 "ddgst": ${ddgst:-false} 00:20:46.498 }, 00:20:46.498 "method": "bdev_nvme_attach_controller" 00:20:46.498 } 00:20:46.498 EOF 00:20:46.498 )") 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:46.498 { 00:20:46.498 "params": { 00:20:46.498 "name": "Nvme$subsystem", 00:20:46.498 "trtype": "$TEST_TRANSPORT", 00:20:46.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.498 "adrfam": "ipv4", 00:20:46.498 "trsvcid": "$NVMF_PORT", 00:20:46.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.498 "hdgst": ${hdgst:-false}, 00:20:46.498 "ddgst": ${ddgst:-false} 00:20:46.498 }, 00:20:46.498 "method": "bdev_nvme_attach_controller" 00:20:46.498 } 00:20:46.498 EOF 00:20:46.498 )") 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:46.498 { 00:20:46.498 "params": { 00:20:46.498 "name": "Nvme$subsystem", 00:20:46.498 "trtype": "$TEST_TRANSPORT", 00:20:46.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.498 "adrfam": "ipv4", 00:20:46.498 "trsvcid": "$NVMF_PORT", 00:20:46.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.498 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.498 "hdgst": ${hdgst:-false}, 00:20:46.498 "ddgst": ${ddgst:-false} 00:20:46.498 }, 00:20:46.498 "method": "bdev_nvme_attach_controller" 00:20:46.498 } 00:20:46.498 EOF 00:20:46.498 )") 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:46.498 { 00:20:46.498 "params": { 00:20:46.498 "name": "Nvme$subsystem", 00:20:46.498 "trtype": "$TEST_TRANSPORT", 00:20:46.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.498 "adrfam": "ipv4", 00:20:46.498 "trsvcid": "$NVMF_PORT", 00:20:46.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.498 "hdgst": ${hdgst:-false}, 00:20:46.498 "ddgst": ${ddgst:-false} 00:20:46.498 }, 00:20:46.498 "method": "bdev_nvme_attach_controller" 00:20:46.498 } 00:20:46.498 EOF 00:20:46.498 )") 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:46.498 { 00:20:46.498 "params": { 00:20:46.498 "name": "Nvme$subsystem", 00:20:46.498 "trtype": "$TEST_TRANSPORT", 00:20:46.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.498 "adrfam": "ipv4", 00:20:46.498 "trsvcid": "$NVMF_PORT", 00:20:46.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.498 "hdgst": ${hdgst:-false}, 00:20:46.498 "ddgst": ${ddgst:-false} 00:20:46.498 }, 00:20:46.498 "method": "bdev_nvme_attach_controller" 00:20:46.498 } 00:20:46.498 EOF 
00:20:46.498 )") 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:46.498 { 00:20:46.498 "params": { 00:20:46.498 "name": "Nvme$subsystem", 00:20:46.498 "trtype": "$TEST_TRANSPORT", 00:20:46.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.498 "adrfam": "ipv4", 00:20:46.498 "trsvcid": "$NVMF_PORT", 00:20:46.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.498 "hdgst": ${hdgst:-false}, 00:20:46.498 "ddgst": ${ddgst:-false} 00:20:46.498 }, 00:20:46.498 "method": "bdev_nvme_attach_controller" 00:20:46.498 } 00:20:46.498 EOF 00:20:46.498 )") 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:46.498 [2024-07-15 18:46:03.058178] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:20:46.498 [2024-07-15 18:46:03.058234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1151705 ] 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:46.498 { 00:20:46.498 "params": { 00:20:46.498 "name": "Nvme$subsystem", 00:20:46.498 "trtype": "$TEST_TRANSPORT", 00:20:46.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.498 "adrfam": "ipv4", 00:20:46.498 "trsvcid": "$NVMF_PORT", 00:20:46.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.498 "hdgst": ${hdgst:-false}, 00:20:46.498 "ddgst": ${ddgst:-false} 00:20:46.498 }, 00:20:46.498 "method": "bdev_nvme_attach_controller" 00:20:46.498 } 00:20:46.498 EOF 00:20:46.498 )") 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:46.498 { 00:20:46.498 "params": { 00:20:46.498 "name": "Nvme$subsystem", 00:20:46.498 "trtype": "$TEST_TRANSPORT", 00:20:46.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.498 "adrfam": "ipv4", 00:20:46.498 "trsvcid": "$NVMF_PORT", 00:20:46.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.498 "hdgst": ${hdgst:-false}, 00:20:46.498 "ddgst": ${ddgst:-false} 00:20:46.498 }, 00:20:46.498 "method": "bdev_nvme_attach_controller" 00:20:46.498 } 00:20:46.498 EOF 00:20:46.498 )") 00:20:46.498 18:46:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:46.498 { 00:20:46.498 "params": { 00:20:46.498 "name": "Nvme$subsystem", 00:20:46.498 "trtype": "$TEST_TRANSPORT", 00:20:46.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.498 "adrfam": "ipv4", 00:20:46.498 "trsvcid": "$NVMF_PORT", 00:20:46.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.498 "hdgst": ${hdgst:-false}, 00:20:46.498 "ddgst": ${ddgst:-false} 00:20:46.498 }, 00:20:46.498 "method": "bdev_nvme_attach_controller" 00:20:46.498 } 00:20:46.498 EOF 00:20:46.498 )") 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:46.498 18:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:46.498 "params": { 00:20:46.498 "name": "Nvme1", 00:20:46.498 "trtype": "tcp", 00:20:46.499 "traddr": "10.0.0.2", 00:20:46.499 "adrfam": "ipv4", 00:20:46.499 "trsvcid": "4420", 00:20:46.499 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.499 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:46.499 "hdgst": false, 00:20:46.499 "ddgst": false 00:20:46.499 }, 00:20:46.499 "method": "bdev_nvme_attach_controller" 00:20:46.499 },{ 00:20:46.499 "params": { 00:20:46.499 "name": "Nvme2", 00:20:46.499 "trtype": "tcp", 00:20:46.499 "traddr": "10.0.0.2", 00:20:46.499 "adrfam": "ipv4", 00:20:46.499 "trsvcid": "4420", 00:20:46.499 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:46.499 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:46.499 "hdgst": false, 00:20:46.499 "ddgst": false 00:20:46.499 }, 00:20:46.499 "method": "bdev_nvme_attach_controller" 00:20:46.499 },{ 00:20:46.499 "params": { 00:20:46.499 "name": "Nvme3", 00:20:46.499 "trtype": "tcp", 00:20:46.499 "traddr": "10.0.0.2", 00:20:46.499 "adrfam": "ipv4", 00:20:46.499 "trsvcid": "4420", 00:20:46.499 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:46.499 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:46.499 "hdgst": false, 00:20:46.499 "ddgst": false 00:20:46.499 }, 00:20:46.499 "method": "bdev_nvme_attach_controller" 00:20:46.499 },{ 00:20:46.499 "params": { 00:20:46.499 "name": "Nvme4", 00:20:46.499 "trtype": "tcp", 00:20:46.499 "traddr": "10.0.0.2", 00:20:46.499 "adrfam": "ipv4", 00:20:46.499 "trsvcid": "4420", 00:20:46.499 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:46.499 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:46.499 "hdgst": false, 00:20:46.499 "ddgst": false 00:20:46.499 }, 00:20:46.499 "method": "bdev_nvme_attach_controller" 00:20:46.499 },{ 00:20:46.499 "params": { 00:20:46.499 "name": "Nvme5", 00:20:46.499 
"trtype": "tcp",
00:20:46.499 "traddr": "10.0.0.2",
00:20:46.499 "adrfam": "ipv4",
00:20:46.499 "trsvcid": "4420",
00:20:46.499 "subnqn": "nqn.2016-06.io.spdk:cnode5",
00:20:46.499 "hostnqn": "nqn.2016-06.io.spdk:host5",
00:20:46.499 "hdgst": false,
00:20:46.499 "ddgst": false
00:20:46.499 },
00:20:46.499 "method": "bdev_nvme_attach_controller"
00:20:46.499 },{
00:20:46.499 "params": {
00:20:46.499 "name": "Nvme6",
00:20:46.499 "trtype": "tcp",
00:20:46.499 "traddr": "10.0.0.2",
00:20:46.499 "adrfam": "ipv4",
00:20:46.499 "trsvcid": "4420",
00:20:46.499 "subnqn": "nqn.2016-06.io.spdk:cnode6",
00:20:46.499 "hostnqn": "nqn.2016-06.io.spdk:host6",
00:20:46.499 "hdgst": false,
00:20:46.499 "ddgst": false
00:20:46.499 },
00:20:46.499 "method": "bdev_nvme_attach_controller"
00:20:46.499 },{
00:20:46.499 "params": {
00:20:46.499 "name": "Nvme7",
00:20:46.499 "trtype": "tcp",
00:20:46.499 "traddr": "10.0.0.2",
00:20:46.499 "adrfam": "ipv4",
00:20:46.499 "trsvcid": "4420",
00:20:46.499 "subnqn": "nqn.2016-06.io.spdk:cnode7",
00:20:46.499 "hostnqn": "nqn.2016-06.io.spdk:host7",
00:20:46.499 "hdgst": false,
00:20:46.499 "ddgst": false
00:20:46.499 },
00:20:46.499 "method": "bdev_nvme_attach_controller"
00:20:46.499 },{
00:20:46.499 "params": {
00:20:46.499 "name": "Nvme8",
00:20:46.499 "trtype": "tcp",
00:20:46.499 "traddr": "10.0.0.2",
00:20:46.499 "adrfam": "ipv4",
00:20:46.499 "trsvcid": "4420",
00:20:46.499 "subnqn": "nqn.2016-06.io.spdk:cnode8",
00:20:46.499 "hostnqn": "nqn.2016-06.io.spdk:host8",
00:20:46.499 "hdgst": false,
00:20:46.499 "ddgst": false
00:20:46.499 },
00:20:46.499 "method": "bdev_nvme_attach_controller"
00:20:46.499 },{
00:20:46.499 "params": {
00:20:46.499 "name": "Nvme9",
00:20:46.499 "trtype": "tcp",
00:20:46.499 "traddr": "10.0.0.2",
00:20:46.499 "adrfam": "ipv4",
00:20:46.499 "trsvcid": "4420",
00:20:46.499 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:20:46.499 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:20:46.499 "hdgst": false,
00:20:46.499 "ddgst": false
00:20:46.499 },
00:20:46.499 "method": "bdev_nvme_attach_controller"
00:20:46.499 },{
00:20:46.499 "params": {
00:20:46.499 "name": "Nvme10",
00:20:46.499 "trtype": "tcp",
00:20:46.499 "traddr": "10.0.0.2",
00:20:46.499 "adrfam": "ipv4",
00:20:46.499 "trsvcid": "4420",
00:20:46.499 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:20:46.499 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:20:46.499 "hdgst": false,
00:20:46.499 "ddgst": false
00:20:46.499 },
00:20:46.499 "method": "bdev_nvme_attach_controller"
00:20:46.499 }'
00:20:46.499 EAL: No free 2048 kB hugepages reported on node 1
00:20:46.499 [2024-07-15 18:46:03.114856] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:46.499 [2024-07-15 18:46:03.190348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:20:47.871 Running I/O for 1 seconds...
00:20:49.241
00:20:49.241 Latency(us)
00:20:49.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:49.241 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:49.241 Verification LBA range: start 0x0 length 0x400
00:20:49.241 Nvme1n1 : 1.14 281.78 17.61 0.00 0.00 223673.03 6895.53 213362.42
00:20:49.241 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:49.241 Verification LBA range: start 0x0 length 0x400
00:20:49.241 Nvme2n1 : 1.08 237.73 14.86 0.00 0.00 262821.84 16640.45 220656.86
00:20:49.241 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:49.241 Verification LBA range: start 0x0 length 0x400
00:20:49.241 Nvme3n1 : 1.12 284.98 17.81 0.00 0.00 215468.30 15842.62 215186.03
00:20:49.241 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:49.241 Verification LBA range: start 0x0 length 0x400
00:20:49.241 Nvme4n1 : 1.13 283.30 17.71 0.00 0.00 214469.94 15272.74 217009.64
00:20:49.241 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:49.241 Verification LBA range: start 0x0 length 0x400
00:20:49.241 Nvme5n1 : 1.15 279.46 17.47 0.00 0.00 214311.09 22909.11 218833.25
00:20:49.241 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:49.241 Verification LBA range: start 0x0 length 0x400
00:20:49.241 Nvme6n1 : 1.14 280.76 17.55 0.00 0.00 210054.10 21085.50 214274.23
00:20:49.241 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:49.241 Verification LBA range: start 0x0 length 0x400
00:20:49.241 Nvme7n1 : 1.15 278.97 17.44 0.00 0.00 208296.11 13563.10 215186.03
00:20:49.241 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:49.241 Verification LBA range: start 0x0 length 0x400
00:20:49.241 Nvme8n1 : 1.14 284.89 17.81 0.00 0.00 200130.34 5014.93 213362.42
00:20:49.241 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:49.241 Verification LBA range: start 0x0 length 0x400
00:20:49.241 Nvme9n1 : 1.15 277.54 17.35 0.00 0.00 203269.52 16640.45 223392.28
00:20:49.241 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:49.241 Verification LBA range: start 0x0 length 0x400
00:20:49.241 Nvme10n1 : 1.15 277.12 17.32 0.00 0.00 200505.17 13962.02 238892.97
00:20:49.241 ===================================================================================================================
00:20:49.241 Total : 2766.53 172.91 0.00 0.00 214312.02 5014.93 238892.97
00:20:49.241 18:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:20:49.241 18:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:20:49.241 18:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:20:49.241 18:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:49.241 18:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:49.241 18:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:49.242 18:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:20:49.242 18:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:49.242 18:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:20:49.242 18:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:49.242 18:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:49.242 rmmod nvme_tcp 00:20:49.242 rmmod nvme_fabrics 00:20:49.242 rmmod nvme_keyring 00:20:49.499 18:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:49.499 18:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:20:49.499 18:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:20:49.499 18:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1151114 ']' 00:20:49.499 18:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1151114 00:20:49.499 18:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1151114 ']' 00:20:49.499 18:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1151114 00:20:49.499 18:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:20:49.499 18:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:49.499 18:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # 
ps --no-headers -o comm= 1151114 00:20:49.499 18:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:49.499 18:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:49.499 18:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1151114' 00:20:49.499 killing process with pid 1151114 00:20:49.499 18:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1151114 00:20:49.499 18:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1151114 00:20:49.757 18:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:49.757 18:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:49.757 18:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:49.757 18:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:49.757 18:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:49.757 18:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.757 18:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:49.757 18:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:52.287 00:20:52.287 real 0m14.756s 00:20:52.287 user 0m33.544s 00:20:52.287 sys 0m5.374s 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:52.287 18:46:08 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:52.287 ************************************ 00:20:52.287 END TEST nvmf_shutdown_tc1 00:20:52.287 ************************************ 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:52.287 ************************************ 00:20:52.287 START TEST nvmf_shutdown_tc2 00:20:52.287 ************************************ 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:20:52.287 
18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 
00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:52.287 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:52.287 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.287 18:46:08 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:52.287 Found net devices under 0000:86:00.0: cvl_0_0 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.287 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:52.288 Found net devices under 0000:86:00.1: cvl_0_1 00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:52.288 18:46:08 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:20:52.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:52.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms
00:20:52.288
00:20:52.288 --- 10.0.0.2 ping statistics ---
00:20:52.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:52.288 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:52.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:52.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms
00:20:52.288
00:20:52.288 --- 10.0.0.1 ping statistics ---
00:20:52.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:52.288 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
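The nvmf/common.sh trace above moves one port (cvl_0_0, given 10.0.0.2, the target side) into a private network namespace, leaves its peer (cvl_0_1, 10.0.0.1, the initiator side) in the root namespace, opens TCP port 4420, and pings in both directions. The same plumbing can be sketched as a dry run, assuming a veth pair in place of the physical cvl_0_0/cvl_0_1 E810 ports; the `run` and `setup_ns_dry_run` names are mine, and commands are echoed rather than executed since the real sequence needs root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target-namespace setup traced in nvmf/common.sh.
# ASSUMPTION: a veth pair stands in for the physical cvl_0_0/cvl_0_1 ports.
run() { echo "+ $*"; }   # change 'echo' to 'sudo' to execute for real

setup_ns_dry_run() {
    local ns=cvl_0_0_ns_spdk tgt=cvl_0_0 ini=cvl_0_1
    run ip link add "$tgt" type veth peer name "$ini"           # not in the log: real HW there
    run ip netns add "$ns"
    run ip link set "$tgt" netns "$ns"                          # target port into the namespace
    run ip addr add 10.0.0.1/24 dev "$ini"                      # initiator IP, root namespace
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"  # target IP, inside namespace
    run ip link set "$ini" up
    run ip netns exec "$ns" ip link set "$tgt" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$ini" -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
    run ping -c 1 10.0.0.2                                      # verified both ways in the log
    run ip netns exec "$ns" ping -c 1 10.0.0.1
}

setup_ns_dry_run
```

Keeping the target inside a namespace is what lets a single host exercise real TCP traffic between initiator and target, as the two successful pings confirm.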
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1152794
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1152794
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1152794 ']'
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:20:52.288 18:46:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:52.288 [2024-07-15 18:46:08.872541] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization...
00:20:52.288 [2024-07-15 18:46:08.872585] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:52.288 EAL: No free 2048 kB hugepages reported on node 1
00:20:52.288 [2024-07-15 18:46:08.929136] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:20:52.547 [2024-07-15 18:46:09.003550] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:52.547 [2024-07-15 18:46:09.003587] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:52.547 [2024-07-15 18:46:09.003593] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:52.547 [2024-07-15 18:46:09.003600] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:52.547 [2024-07-15 18:46:09.003605] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
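nvmf_tgt is launched with `-m 0x1E`, echoed in the EAL parameters above as `-c 0x1E`. That mask has bits 1 through 4 set, which is why the target reports four available cores and the reactor lines that follow start on cores 1, 2, 3 and 4, while core 0 (used by the single-core bdevperf run in tc1) stays free. A small helper to decode such a mask — plain bash, nothing SPDK-specific, and `mask_to_cores` is a name of my choosing:

```shell
#!/usr/bin/env bash
# Decode a DPDK/SPDK hex core mask (as passed via -m/-c) into core IDs.
mask_to_cores() {
    local mask=$(( $1 )) core=0 out=""
    while [ "$mask" -ne 0 ]; do
        if [ $(( mask & 1 )) -eq 1 ]; then
            out="$out $core"        # this bit selects a core
        fi
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    echo "${out# }"
}

mask_to_cores 0x1E   # -> 1 2 3 4, matching the four reactor cores in this log
mask_to_cores 0x1    # -> 0, the single core the tc1 bdevperf run used
```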
00:20:52.547 [2024-07-15 18:46:09.003710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.547 [2024-07-15 18:46:09.003774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:52.547 [2024-07-15 18:46:09.003865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.547 [2024-07-15 18:46:09.003866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:53.112 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:53.112 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:20:53.112 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:53.112 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:53.112 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:53.112 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.112 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:53.113 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.113 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:53.113 [2024-07-15 18:46:09.721258] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.113 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.113 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:53.113 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:53.113 
18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:53.113 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:53.113 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:53.113 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.113 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:53.113 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.113 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:53.113 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.113 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:53.113 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.113 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:53.113 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.113 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:53.113 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.113 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:53.113 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.113 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:53.113 18:46:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.113 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:53.113 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.113 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:53.113 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:53.113 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:53.113 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:53.113 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.113 18:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:53.113 Malloc1 00:20:53.113 [2024-07-15 18:46:09.816955] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.370 Malloc2 00:20:53.370 Malloc3 00:20:53.370 Malloc4 00:20:53.370 Malloc5 00:20:53.370 Malloc6 00:20:53.370 Malloc7 00:20:53.628 Malloc8 00:20:53.628 Malloc9 00:20:53.628 Malloc10 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1153088 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 
1153088 /var/tmp/bdevperf.sock 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1153088 ']' 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:53.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.628 { 00:20:53.628 "params": { 00:20:53.628 "name": "Nvme$subsystem", 00:20:53.628 "trtype": "$TEST_TRANSPORT", 00:20:53.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.628 "adrfam": "ipv4", 00:20:53.628 "trsvcid": "$NVMF_PORT", 00:20:53.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.628 "hdgst": ${hdgst:-false}, 00:20:53.628 "ddgst": ${ddgst:-false} 00:20:53.628 }, 00:20:53.628 "method": "bdev_nvme_attach_controller" 00:20:53.628 } 00:20:53.628 EOF 00:20:53.628 )") 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.628 { 00:20:53.628 "params": { 00:20:53.628 "name": "Nvme$subsystem", 00:20:53.628 "trtype": "$TEST_TRANSPORT", 00:20:53.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.628 "adrfam": "ipv4", 00:20:53.628 "trsvcid": "$NVMF_PORT", 00:20:53.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.628 "hdgst": ${hdgst:-false}, 00:20:53.628 "ddgst": ${ddgst:-false} 00:20:53.628 
}, 00:20:53.628 "method": "bdev_nvme_attach_controller" 00:20:53.628 } 00:20:53.628 EOF 00:20:53.628 )") 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.628 { 00:20:53.628 "params": { 00:20:53.628 "name": "Nvme$subsystem", 00:20:53.628 "trtype": "$TEST_TRANSPORT", 00:20:53.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.628 "adrfam": "ipv4", 00:20:53.628 "trsvcid": "$NVMF_PORT", 00:20:53.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.628 "hdgst": ${hdgst:-false}, 00:20:53.628 "ddgst": ${ddgst:-false} 00:20:53.628 }, 00:20:53.628 "method": "bdev_nvme_attach_controller" 00:20:53.628 } 00:20:53.628 EOF 00:20:53.628 )") 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.628 { 00:20:53.628 "params": { 00:20:53.628 "name": "Nvme$subsystem", 00:20:53.628 "trtype": "$TEST_TRANSPORT", 00:20:53.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.628 "adrfam": "ipv4", 00:20:53.628 "trsvcid": "$NVMF_PORT", 00:20:53.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.628 "hdgst": ${hdgst:-false}, 00:20:53.628 "ddgst": ${ddgst:-false} 00:20:53.628 }, 00:20:53.628 "method": "bdev_nvme_attach_controller" 00:20:53.628 } 00:20:53.628 EOF 00:20:53.628 )") 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:53.628 18:46:10 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.628 { 00:20:53.628 "params": { 00:20:53.628 "name": "Nvme$subsystem", 00:20:53.628 "trtype": "$TEST_TRANSPORT", 00:20:53.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.628 "adrfam": "ipv4", 00:20:53.628 "trsvcid": "$NVMF_PORT", 00:20:53.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.628 "hdgst": ${hdgst:-false}, 00:20:53.628 "ddgst": ${ddgst:-false} 00:20:53.628 }, 00:20:53.628 "method": "bdev_nvme_attach_controller" 00:20:53.628 } 00:20:53.628 EOF 00:20:53.628 )") 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.628 { 00:20:53.628 "params": { 00:20:53.628 "name": "Nvme$subsystem", 00:20:53.628 "trtype": "$TEST_TRANSPORT", 00:20:53.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.628 "adrfam": "ipv4", 00:20:53.628 "trsvcid": "$NVMF_PORT", 00:20:53.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.628 "hdgst": ${hdgst:-false}, 00:20:53.628 "ddgst": ${ddgst:-false} 00:20:53.628 }, 00:20:53.628 "method": "bdev_nvme_attach_controller" 00:20:53.628 } 00:20:53.628 EOF 00:20:53.628 )") 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.628 { 00:20:53.628 
"params": { 00:20:53.628 "name": "Nvme$subsystem", 00:20:53.628 "trtype": "$TEST_TRANSPORT", 00:20:53.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.628 "adrfam": "ipv4", 00:20:53.628 "trsvcid": "$NVMF_PORT", 00:20:53.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.628 "hdgst": ${hdgst:-false}, 00:20:53.628 "ddgst": ${ddgst:-false} 00:20:53.628 }, 00:20:53.628 "method": "bdev_nvme_attach_controller" 00:20:53.628 } 00:20:53.628 EOF 00:20:53.628 )") 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:53.628 [2024-07-15 18:46:10.292847] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:20:53.628 [2024-07-15 18:46:10.292900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1153088 ] 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.628 { 00:20:53.628 "params": { 00:20:53.628 "name": "Nvme$subsystem", 00:20:53.628 "trtype": "$TEST_TRANSPORT", 00:20:53.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.628 "adrfam": "ipv4", 00:20:53.628 "trsvcid": "$NVMF_PORT", 00:20:53.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.628 "hdgst": ${hdgst:-false}, 00:20:53.628 "ddgst": ${ddgst:-false} 00:20:53.628 }, 00:20:53.628 "method": "bdev_nvme_attach_controller" 00:20:53.628 } 00:20:53.628 EOF 00:20:53.628 )") 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.628 { 00:20:53.628 "params": { 00:20:53.628 "name": "Nvme$subsystem", 00:20:53.628 "trtype": "$TEST_TRANSPORT", 00:20:53.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.628 "adrfam": "ipv4", 00:20:53.628 "trsvcid": "$NVMF_PORT", 00:20:53.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.628 "hdgst": ${hdgst:-false}, 00:20:53.628 "ddgst": ${ddgst:-false} 00:20:53.628 }, 00:20:53.628 "method": "bdev_nvme_attach_controller" 00:20:53.628 } 00:20:53.628 EOF 00:20:53.628 )") 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.628 { 00:20:53.628 "params": { 00:20:53.628 "name": "Nvme$subsystem", 00:20:53.628 "trtype": "$TEST_TRANSPORT", 00:20:53.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.628 "adrfam": "ipv4", 00:20:53.628 "trsvcid": "$NVMF_PORT", 00:20:53.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.628 "hdgst": ${hdgst:-false}, 00:20:53.628 "ddgst": ${ddgst:-false} 00:20:53.628 }, 00:20:53.628 "method": "bdev_nvme_attach_controller" 00:20:53.628 } 00:20:53.628 EOF 00:20:53.628 )") 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:20:53.628 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:20:53.628 18:46:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:53.628 "params": { 00:20:53.628 "name": "Nvme1", 00:20:53.628 "trtype": "tcp", 00:20:53.628 "traddr": "10.0.0.2", 00:20:53.628 "adrfam": "ipv4", 00:20:53.628 "trsvcid": "4420", 00:20:53.628 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.628 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:53.628 "hdgst": false, 00:20:53.628 "ddgst": false 00:20:53.628 }, 00:20:53.628 "method": "bdev_nvme_attach_controller" 00:20:53.628 },{ 00:20:53.628 "params": { 00:20:53.628 "name": "Nvme2", 00:20:53.628 "trtype": "tcp", 00:20:53.628 "traddr": "10.0.0.2", 00:20:53.628 "adrfam": "ipv4", 00:20:53.628 "trsvcid": "4420", 00:20:53.628 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:53.628 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:53.628 "hdgst": false, 00:20:53.628 "ddgst": false 00:20:53.628 }, 00:20:53.628 "method": "bdev_nvme_attach_controller" 00:20:53.628 },{ 00:20:53.628 "params": { 00:20:53.628 "name": "Nvme3", 00:20:53.628 "trtype": "tcp", 00:20:53.628 "traddr": "10.0.0.2", 00:20:53.628 "adrfam": "ipv4", 00:20:53.628 "trsvcid": "4420", 00:20:53.628 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:53.628 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:53.628 "hdgst": false, 00:20:53.628 "ddgst": false 00:20:53.628 }, 00:20:53.628 "method": "bdev_nvme_attach_controller" 00:20:53.628 },{ 00:20:53.628 "params": { 00:20:53.628 "name": "Nvme4", 00:20:53.628 "trtype": "tcp", 00:20:53.628 "traddr": "10.0.0.2", 00:20:53.628 "adrfam": "ipv4", 00:20:53.628 "trsvcid": "4420", 00:20:53.628 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:53.628 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:53.628 "hdgst": false, 00:20:53.628 "ddgst": false 00:20:53.628 }, 00:20:53.628 "method": "bdev_nvme_attach_controller" 00:20:53.628 },{ 
00:20:53.628 "params": { 00:20:53.628 "name": "Nvme5", 00:20:53.628 "trtype": "tcp", 00:20:53.628 "traddr": "10.0.0.2", 00:20:53.628 "adrfam": "ipv4", 00:20:53.628 "trsvcid": "4420", 00:20:53.628 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:53.628 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:53.628 "hdgst": false, 00:20:53.628 "ddgst": false 00:20:53.628 }, 00:20:53.628 "method": "bdev_nvme_attach_controller" 00:20:53.628 },{ 00:20:53.628 "params": { 00:20:53.628 "name": "Nvme6", 00:20:53.628 "trtype": "tcp", 00:20:53.628 "traddr": "10.0.0.2", 00:20:53.628 "adrfam": "ipv4", 00:20:53.628 "trsvcid": "4420", 00:20:53.628 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:53.628 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:53.628 "hdgst": false, 00:20:53.628 "ddgst": false 00:20:53.628 }, 00:20:53.628 "method": "bdev_nvme_attach_controller" 00:20:53.628 },{ 00:20:53.628 "params": { 00:20:53.628 "name": "Nvme7", 00:20:53.628 "trtype": "tcp", 00:20:53.629 "traddr": "10.0.0.2", 00:20:53.629 "adrfam": "ipv4", 00:20:53.629 "trsvcid": "4420", 00:20:53.629 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:53.629 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:53.629 "hdgst": false, 00:20:53.629 "ddgst": false 00:20:53.629 }, 00:20:53.629 "method": "bdev_nvme_attach_controller" 00:20:53.629 },{ 00:20:53.629 "params": { 00:20:53.629 "name": "Nvme8", 00:20:53.629 "trtype": "tcp", 00:20:53.629 "traddr": "10.0.0.2", 00:20:53.629 "adrfam": "ipv4", 00:20:53.629 "trsvcid": "4420", 00:20:53.629 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:53.629 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:53.629 "hdgst": false, 00:20:53.629 "ddgst": false 00:20:53.629 }, 00:20:53.629 "method": "bdev_nvme_attach_controller" 00:20:53.629 },{ 00:20:53.629 "params": { 00:20:53.629 "name": "Nvme9", 00:20:53.629 "trtype": "tcp", 00:20:53.629 "traddr": "10.0.0.2", 00:20:53.629 "adrfam": "ipv4", 00:20:53.629 "trsvcid": "4420", 00:20:53.629 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:53.629 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:20:53.629 "hdgst": false, 00:20:53.629 "ddgst": false 00:20:53.629 }, 00:20:53.629 "method": "bdev_nvme_attach_controller" 00:20:53.629 },{ 00:20:53.629 "params": { 00:20:53.629 "name": "Nvme10", 00:20:53.629 "trtype": "tcp", 00:20:53.629 "traddr": "10.0.0.2", 00:20:53.629 "adrfam": "ipv4", 00:20:53.629 "trsvcid": "4420", 00:20:53.629 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:53.629 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:53.629 "hdgst": false, 00:20:53.629 "ddgst": false 00:20:53.629 }, 00:20:53.629 "method": "bdev_nvme_attach_controller" 00:20:53.629 }' 00:20:53.886 [2024-07-15 18:46:10.350181] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.886 [2024-07-15 18:46:10.425544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.797 Running I/O for 10 seconds... 00:20:55.797 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:55.797 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:20:55.797 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:55.797 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.797 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:55.797 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.797 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:55.797 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:55.797 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:55.797 18:46:12 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:20:55.797 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:20:55.797 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:55.797 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:55.797 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:55.797 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.797 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:55.797 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:55.797 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.797 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:55.797 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:55.797 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:55.797 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:55.797 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:55.797 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:55.797 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.797 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:55.797 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:56.056 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.056 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:56.056 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:56.056 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:20:56.056 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:20:56.056 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:20:56.056 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1153088 00:20:56.056 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1153088 ']' 00:20:56.056 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1153088 00:20:56.056 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:20:56.056 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:56.056 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1153088 00:20:56.056 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:56.056 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:56.056 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1153088' 00:20:56.056 killing process with pid 1153088 00:20:56.056 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1153088 00:20:56.056 18:46:12 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1153088 00:20:56.056 Received shutdown signal, test time was about 0.621531 seconds 00:20:56.056 00:20:56.056 Latency(us) 00:20:56.056 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.056 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.056 Verification LBA range: start 0x0 length 0x400 00:20:56.056 Nvme1n1 : 0.61 315.17 19.70 0.00 0.00 199930.81 24162.84 202420.76 00:20:56.056 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.056 Verification LBA range: start 0x0 length 0x400 00:20:56.056 Nvme2n1 : 0.59 215.85 13.49 0.00 0.00 282652.94 24390.79 202420.76 00:20:56.056 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.056 Verification LBA range: start 0x0 length 0x400 00:20:56.056 Nvme3n1 : 0.60 317.60 19.85 0.00 0.00 187853.84 14816.83 213362.42 00:20:56.056 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.056 Verification LBA range: start 0x0 length 0x400 00:20:56.056 Nvme4n1 : 0.62 301.19 18.82 0.00 0.00 192025.81 14816.83 220656.86 00:20:56.056 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.056 Verification LBA range: start 0x0 length 0x400 00:20:56.056 Nvme5n1 : 0.59 226.93 14.18 0.00 0.00 244378.05 3846.68 193302.71 00:20:56.056 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.056 Verification LBA range: start 0x0 length 0x400 00:20:56.056 Nvme6n1 : 0.62 310.62 19.41 0.00 0.00 176541.76 16526.47 198773.54 00:20:56.056 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.056 Verification LBA range: start 0x0 length 0x400 00:20:56.057 Nvme7n1 : 0.62 311.97 19.50 0.00 0.00 170475.52 16526.47 219745.06 00:20:56.057 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.057 Verification LBA 
range: start 0x0 length 0x400 00:20:56.057 Nvme8n1 : 0.61 314.05 19.63 0.00 0.00 163579.70 14246.96 196038.12 00:20:56.057 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.057 Verification LBA range: start 0x0 length 0x400 00:20:56.057 Nvme9n1 : 0.60 215.06 13.44 0.00 0.00 229839.47 28835.84 224304.08 00:20:56.057 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.057 Verification LBA range: start 0x0 length 0x400 00:20:56.057 Nvme10n1 : 0.60 213.09 13.32 0.00 0.00 224900.67 23934.89 238892.97 00:20:56.057 =================================================================================================================== 00:20:56.057 Total : 2741.52 171.34 0.00 0.00 201494.41 3846.68 238892.97 00:20:56.315 18:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:20:57.247 18:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1152794 00:20:57.247 18:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:20:57.247 18:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:57.247 18:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:57.247 18:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:57.247 18:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:57.247 18:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:57.247 18:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:20:57.247 18:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:57.247 
18:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:20:57.247 18:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:57.247 18:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:57.247 rmmod nvme_tcp 00:20:57.247 rmmod nvme_fabrics 00:20:57.247 rmmod nvme_keyring 00:20:57.247 18:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:57.247 18:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:20:57.247 18:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:20:57.247 18:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1152794 ']' 00:20:57.247 18:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1152794 00:20:57.247 18:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1152794 ']' 00:20:57.247 18:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1152794 00:20:57.247 18:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:20:57.247 18:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:57.247 18:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1152794 00:20:57.504 18:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:57.504 18:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:57.504 18:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1152794' 00:20:57.504 killing process with pid 1152794 00:20:57.505 18:46:13 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1152794 00:20:57.505 18:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1152794 00:20:57.762 18:46:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:57.762 18:46:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:57.762 18:46:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:57.762 18:46:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:57.762 18:46:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:57.762 18:46:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.762 18:46:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:57.762 18:46:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:00.295 00:21:00.295 real 0m7.919s 00:21:00.295 user 0m23.928s 00:21:00.295 sys 0m1.215s 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:00.295 ************************************ 00:21:00.295 END TEST nvmf_shutdown_tc2 00:21:00.295 ************************************ 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:00.295 ************************************ 00:21:00.295 START TEST nvmf_shutdown_tc3 00:21:00.295 ************************************ 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:00.295 
18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:00.295 18:46:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:00.295 18:46:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:00.295 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:00.295 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.295 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:00.296 Found net devices under 0000:86:00.0: cvl_0_0 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:00.296 Found net devices under 0000:86:00.1: cvl_0_1 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
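The discovery loop above resolves each PCI function (0000:86:00.0 and 0000:86:00.1) to its kernel interface name by globbing sysfs, then strips the directory prefix. A minimal standalone sketch of that lookup follows; the `sysfs_root` parameter is an addition for illustration (the real script hardcodes `/sys/bus/pci/devices` and also checks driver binding and link state):

```shell
# Sketch: map a PCI address to its net interface name(s) via sysfs,
# mirroring the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob
# and the "${pci_net_devs[@]##*/}" prefix-strip seen in the log.
# sysfs_root is parameterized here only so the logic can be exercised
# against a fake tree; on a real host it is /sys/bus/pci/devices.
list_pci_net_devs() {
    local sysfs_root=$1 pci=$2
    local devs=("$sysfs_root/$pci/net/"*)
    # Keep only the leaf names (e.g. cvl_0_0), as the log does.
    devs=("${devs[@]##*/}")
    printf '%s\n' "${devs[@]}"
}
```

If the glob matches nothing, bash leaves the pattern literal; the real harness guards against that case (the `(( 1 == 0 ))` count check in the log), which this sketch omits.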
00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:00.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:00.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:21:00.296 00:21:00.296 --- 10.0.0.2 ping statistics --- 00:21:00.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.296 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:00.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:00.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:21:00.296 00:21:00.296 --- 10.0.0.1 ping statistics --- 00:21:00.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.296 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1154231 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1154231 00:21:00.296 18:46:16 
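The `nvmf_tcp_init` sequence above splits the two ports of one NIC across namespaces: cvl_0_0 moves into `cvl_0_0_ns_spdk` to host the SPDK target at 10.0.0.2, while cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, with an iptables rule opening TCP 4420. A dry-run sketch of that plumbing is below; commands are echoed rather than executed (namespace setup needs root), and the interface names and addresses simply mirror the log:

```shell
# Sketch of the netns plumbing from the log. With RUN=echo (the
# default) each command is printed instead of run; set RUN= and run
# as root to actually apply it on a host with cvl_0_0/cvl_0_1.
RUN=${RUN:-echo}
NS=cvl_0_0_ns_spdk
setup_nvmf_netns() {
    $RUN ip netns add "$NS"
    $RUN ip link set cvl_0_0 netns "$NS"              # target-side port
    $RUN ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator address
    $RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    $RUN ip link set cvl_0_1 up
    $RUN ip netns exec "$NS" ip link set cvl_0_0 up
    $RUN ip netns exec "$NS" ip link set lo up
    $RUN iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
}
```

The two `ping -c 1` probes in the log then verify reachability in both directions before `NVMF_APP` is prefixed with `ip netns exec` so the target process starts inside the namespace.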
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1154231 ']' 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:00.296 18:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:00.296 [2024-07-15 18:46:16.865632] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:21:00.296 [2024-07-15 18:46:16.865676] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.296 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.296 [2024-07-15 18:46:16.924615] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:00.296 [2024-07-15 18:46:16.997978] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.296 [2024-07-15 18:46:16.998020] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:00.296 [2024-07-15 18:46:16.998027] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.296 [2024-07-15 18:46:16.998034] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.296 [2024-07-15 18:46:16.998039] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:00.296 [2024-07-15 18:46:16.998135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.296 [2024-07-15 18:46:16.998218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:00.296 [2024-07-15 18:46:16.998304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.296 [2024-07-15 18:46:16.998305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.231 [2024-07-15 18:46:17.704277] tcp.c: 672:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for 
i in "${num_subsystems[@]}" 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:01.231 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:01.232 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.232 18:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.232 Malloc1 00:21:01.232 [2024-07-15 18:46:17.795886] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.232 Malloc2 00:21:01.232 Malloc3 00:21:01.232 Malloc4 00:21:01.490 Malloc5 00:21:01.490 Malloc6 00:21:01.490 Malloc7 00:21:01.490 Malloc8 00:21:01.490 Malloc9 00:21:01.490 Malloc10 00:21:01.490 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.490 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:01.490 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@728 -- # xtrace_disable 00:21:01.490 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.750 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1154516 00:21:01.750 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1154516 /var/tmp/bdevperf.sock 00:21:01.750 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1154516 ']' 00:21:01.750 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:01.750 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:01.750 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:01.750 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:01.750 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:01.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:01.750 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:21:01.750 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:01.750 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:21:01.750 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.750 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:01.750 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:01.750 { 00:21:01.750 "params": { 00:21:01.750 "name": "Nvme$subsystem", 00:21:01.750 "trtype": "$TEST_TRANSPORT", 00:21:01.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.750 "adrfam": "ipv4", 00:21:01.750 "trsvcid": "$NVMF_PORT", 00:21:01.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.750 "hdgst": ${hdgst:-false}, 00:21:01.750 "ddgst": ${ddgst:-false} 00:21:01.750 }, 00:21:01.750 "method": "bdev_nvme_attach_controller" 00:21:01.750 } 00:21:01.750 EOF 00:21:01.750 )") 00:21:01.750 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:01.750 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:01.750 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:01.750 { 00:21:01.750 "params": { 00:21:01.750 "name": "Nvme$subsystem", 00:21:01.750 "trtype": "$TEST_TRANSPORT", 00:21:01.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.750 "adrfam": "ipv4", 00:21:01.750 "trsvcid": "$NVMF_PORT", 00:21:01.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.750 "hdgst": ${hdgst:-false}, 00:21:01.750 "ddgst": ${ddgst:-false} 00:21:01.750 
}, 00:21:01.750 "method": "bdev_nvme_attach_controller" 00:21:01.750 } 00:21:01.750 EOF 00:21:01.750 )") 00:21:01.750 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:01.750 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:01.750 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:01.750 { 00:21:01.750 "params": { 00:21:01.750 "name": "Nvme$subsystem", 00:21:01.750 "trtype": "$TEST_TRANSPORT", 00:21:01.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.750 "adrfam": "ipv4", 00:21:01.750 "trsvcid": "$NVMF_PORT", 00:21:01.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.750 "hdgst": ${hdgst:-false}, 00:21:01.750 "ddgst": ${ddgst:-false} 00:21:01.750 }, 00:21:01.750 "method": "bdev_nvme_attach_controller" 00:21:01.750 } 00:21:01.750 EOF 00:21:01.750 )") 00:21:01.750 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:01.750 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:01.750 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:01.750 { 00:21:01.750 "params": { 00:21:01.750 "name": "Nvme$subsystem", 00:21:01.750 "trtype": "$TEST_TRANSPORT", 00:21:01.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.750 "adrfam": "ipv4", 00:21:01.750 "trsvcid": "$NVMF_PORT", 00:21:01.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.750 "hdgst": ${hdgst:-false}, 00:21:01.750 "ddgst": ${ddgst:-false} 00:21:01.750 }, 00:21:01.750 "method": "bdev_nvme_attach_controller" 00:21:01.750 } 00:21:01.750 EOF 00:21:01.750 )") 00:21:01.750 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:01.750 18:46:18 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:01.750 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:01.750 { 00:21:01.750 "params": { 00:21:01.750 "name": "Nvme$subsystem", 00:21:01.750 "trtype": "$TEST_TRANSPORT", 00:21:01.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.750 "adrfam": "ipv4", 00:21:01.750 "trsvcid": "$NVMF_PORT", 00:21:01.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.750 "hdgst": ${hdgst:-false}, 00:21:01.750 "ddgst": ${ddgst:-false} 00:21:01.750 }, 00:21:01.750 "method": "bdev_nvme_attach_controller" 00:21:01.750 } 00:21:01.750 EOF 00:21:01.750 )") 00:21:01.750 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:01.750 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:01.750 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:01.750 { 00:21:01.750 "params": { 00:21:01.750 "name": "Nvme$subsystem", 00:21:01.750 "trtype": "$TEST_TRANSPORT", 00:21:01.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.751 "adrfam": "ipv4", 00:21:01.751 "trsvcid": "$NVMF_PORT", 00:21:01.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.751 "hdgst": ${hdgst:-false}, 00:21:01.751 "ddgst": ${ddgst:-false} 00:21:01.751 }, 00:21:01.751 "method": "bdev_nvme_attach_controller" 00:21:01.751 } 00:21:01.751 EOF 00:21:01.751 )") 00:21:01.751 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:01.751 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:01.751 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:01.751 { 00:21:01.751 
"params": { 00:21:01.751 "name": "Nvme$subsystem", 00:21:01.751 "trtype": "$TEST_TRANSPORT", 00:21:01.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.751 "adrfam": "ipv4", 00:21:01.751 "trsvcid": "$NVMF_PORT", 00:21:01.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.751 "hdgst": ${hdgst:-false}, 00:21:01.751 "ddgst": ${ddgst:-false} 00:21:01.751 }, 00:21:01.751 "method": "bdev_nvme_attach_controller" 00:21:01.751 } 00:21:01.751 EOF 00:21:01.751 )") 00:21:01.751 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:01.751 [2024-07-15 18:46:18.267559] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:21:01.751 [2024-07-15 18:46:18.267606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1154516 ] 00:21:01.751 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:01.751 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:01.751 { 00:21:01.751 "params": { 00:21:01.751 "name": "Nvme$subsystem", 00:21:01.751 "trtype": "$TEST_TRANSPORT", 00:21:01.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.751 "adrfam": "ipv4", 00:21:01.751 "trsvcid": "$NVMF_PORT", 00:21:01.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.751 "hdgst": ${hdgst:-false}, 00:21:01.751 "ddgst": ${ddgst:-false} 00:21:01.751 }, 00:21:01.751 "method": "bdev_nvme_attach_controller" 00:21:01.751 } 00:21:01.751 EOF 00:21:01.751 )") 00:21:01.751 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:01.751 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:01.751 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:01.751 { 00:21:01.751 "params": { 00:21:01.751 "name": "Nvme$subsystem", 00:21:01.751 "trtype": "$TEST_TRANSPORT", 00:21:01.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.751 "adrfam": "ipv4", 00:21:01.751 "trsvcid": "$NVMF_PORT", 00:21:01.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.751 "hdgst": ${hdgst:-false}, 00:21:01.751 "ddgst": ${ddgst:-false} 00:21:01.751 }, 00:21:01.751 "method": "bdev_nvme_attach_controller" 00:21:01.751 } 00:21:01.751 EOF 00:21:01.751 )") 00:21:01.751 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:01.751 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:01.751 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:01.751 { 00:21:01.751 "params": { 00:21:01.751 "name": "Nvme$subsystem", 00:21:01.751 "trtype": "$TEST_TRANSPORT", 00:21:01.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.751 "adrfam": "ipv4", 00:21:01.751 "trsvcid": "$NVMF_PORT", 00:21:01.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.751 "hdgst": ${hdgst:-false}, 00:21:01.751 "ddgst": ${ddgst:-false} 00:21:01.751 }, 00:21:01.751 "method": "bdev_nvme_attach_controller" 00:21:01.751 } 00:21:01.751 EOF 00:21:01.751 )") 00:21:01.751 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:01.751 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
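The `gen_nvmf_target_json` trace above shows the pattern used to feed bdevperf its config on `/dev/fd/63`: one JSON fragment per subsystem is produced by an unquoted heredoc (so `$subsystem` expands per iteration), collected into a `config` array, and joined with commas via `IFS`. A simplified, self-contained sketch of the same pattern (field values are the placeholders visible in the log; the real helper also substitutes `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, and `$NVMF_PORT` and pipes the result through `jq`):

```shell
# Sketch: build a comma-joined list of bdev_nvme_attach_controller
# entries, one per subsystem number, the way the log's
# config+=("$(cat <<-EOF ... EOF)") loop does.
gen_controllers_json() {
    local config=() subsystem frag
    for subsystem in "$@"; do
        frag=$(cat <<EOF
{"params": {"name": "Nvme$subsystem", "trtype": "tcp",
 "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420",
 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
 "hdgst": false, "ddgst": false},
 "method": "bdev_nvme_attach_controller"}
EOF
)
        config+=("$frag")
    done
    local IFS=,
    printf '[%s]\n' "${config[*]}"   # "${config[*]}" joins on IFS
}
```

Called as `gen_controllers_json 1 2 ... 10`, this yields the ten-controller array whose fully expanded form appears in the `printf '%s\n'` output that follows in the log.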
00:21:01.751 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.751 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:21:01.751 18:46:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:01.751 "params": { 00:21:01.751 "name": "Nvme1", 00:21:01.751 "trtype": "tcp", 00:21:01.751 "traddr": "10.0.0.2", 00:21:01.751 "adrfam": "ipv4", 00:21:01.751 "trsvcid": "4420", 00:21:01.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:01.751 "hdgst": false, 00:21:01.751 "ddgst": false 00:21:01.751 }, 00:21:01.751 "method": "bdev_nvme_attach_controller" 00:21:01.751 },{ 00:21:01.751 "params": { 00:21:01.751 "name": "Nvme2", 00:21:01.751 "trtype": "tcp", 00:21:01.751 "traddr": "10.0.0.2", 00:21:01.751 "adrfam": "ipv4", 00:21:01.751 "trsvcid": "4420", 00:21:01.751 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:01.751 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:01.751 "hdgst": false, 00:21:01.751 "ddgst": false 00:21:01.751 }, 00:21:01.751 "method": "bdev_nvme_attach_controller" 00:21:01.751 },{ 00:21:01.751 "params": { 00:21:01.751 "name": "Nvme3", 00:21:01.751 "trtype": "tcp", 00:21:01.751 "traddr": "10.0.0.2", 00:21:01.751 "adrfam": "ipv4", 00:21:01.751 "trsvcid": "4420", 00:21:01.751 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:01.751 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:01.751 "hdgst": false, 00:21:01.751 "ddgst": false 00:21:01.751 }, 00:21:01.751 "method": "bdev_nvme_attach_controller" 00:21:01.751 },{ 00:21:01.751 "params": { 00:21:01.751 "name": "Nvme4", 00:21:01.751 "trtype": "tcp", 00:21:01.751 "traddr": "10.0.0.2", 00:21:01.751 "adrfam": "ipv4", 00:21:01.751 "trsvcid": "4420", 00:21:01.751 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:01.751 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:01.751 "hdgst": false, 00:21:01.751 "ddgst": false 00:21:01.751 }, 00:21:01.751 "method": "bdev_nvme_attach_controller" 00:21:01.751 },{ 
00:21:01.751 "params": { 00:21:01.751 "name": "Nvme5", 00:21:01.751 "trtype": "tcp", 00:21:01.751 "traddr": "10.0.0.2", 00:21:01.751 "adrfam": "ipv4", 00:21:01.751 "trsvcid": "4420", 00:21:01.751 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:01.751 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:01.751 "hdgst": false, 00:21:01.751 "ddgst": false 00:21:01.751 }, 00:21:01.751 "method": "bdev_nvme_attach_controller" 00:21:01.751 },{ 00:21:01.751 "params": { 00:21:01.751 "name": "Nvme6", 00:21:01.751 "trtype": "tcp", 00:21:01.751 "traddr": "10.0.0.2", 00:21:01.752 "adrfam": "ipv4", 00:21:01.752 "trsvcid": "4420", 00:21:01.752 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:01.752 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:01.752 "hdgst": false, 00:21:01.752 "ddgst": false 00:21:01.752 }, 00:21:01.752 "method": "bdev_nvme_attach_controller" 00:21:01.752 },{ 00:21:01.752 "params": { 00:21:01.752 "name": "Nvme7", 00:21:01.752 "trtype": "tcp", 00:21:01.752 "traddr": "10.0.0.2", 00:21:01.752 "adrfam": "ipv4", 00:21:01.752 "trsvcid": "4420", 00:21:01.752 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:01.752 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:01.752 "hdgst": false, 00:21:01.752 "ddgst": false 00:21:01.752 }, 00:21:01.752 "method": "bdev_nvme_attach_controller" 00:21:01.752 },{ 00:21:01.752 "params": { 00:21:01.752 "name": "Nvme8", 00:21:01.752 "trtype": "tcp", 00:21:01.752 "traddr": "10.0.0.2", 00:21:01.752 "adrfam": "ipv4", 00:21:01.752 "trsvcid": "4420", 00:21:01.752 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:01.752 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:01.752 "hdgst": false, 00:21:01.752 "ddgst": false 00:21:01.752 }, 00:21:01.752 "method": "bdev_nvme_attach_controller" 00:21:01.752 },{ 00:21:01.752 "params": { 00:21:01.752 "name": "Nvme9", 00:21:01.752 "trtype": "tcp", 00:21:01.752 "traddr": "10.0.0.2", 00:21:01.752 "adrfam": "ipv4", 00:21:01.752 "trsvcid": "4420", 00:21:01.752 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:01.752 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:21:01.752 "hdgst": false, 00:21:01.752 "ddgst": false 00:21:01.752 }, 00:21:01.752 "method": "bdev_nvme_attach_controller" 00:21:01.752 },{ 00:21:01.752 "params": { 00:21:01.752 "name": "Nvme10", 00:21:01.752 "trtype": "tcp", 00:21:01.752 "traddr": "10.0.0.2", 00:21:01.752 "adrfam": "ipv4", 00:21:01.752 "trsvcid": "4420", 00:21:01.752 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:01.752 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:01.752 "hdgst": false, 00:21:01.752 "ddgst": false 00:21:01.752 }, 00:21:01.752 "method": "bdev_nvme_attach_controller" 00:21:01.752 }' 00:21:01.752 [2024-07-15 18:46:18.322772] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.752 [2024-07-15 18:46:18.396746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.126 Running I/O for 10 seconds... 00:21:03.126 18:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:03.126 18:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:03.126 18:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:03.126 18:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.126 18:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:03.385 18:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.385 18:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:03.385 18:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:03.385 18:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # 
'[' -z /var/tmp/bdevperf.sock ']' 00:21:03.385 18:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:03.385 18:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:21:03.385 18:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:21:03.385 18:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:03.385 18:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:03.385 18:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:03.385 18:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:03.385 18:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.385 18:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:03.385 18:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.385 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:03.385 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:03.385 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:03.643 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:03.643 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:03.643 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:03.643 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 
00:21:03.643 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.643 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:03.643 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.643 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:21:03.644 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:21:03.644 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:03.902 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:03.902 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:03.902 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:03.902 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.902 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:03.902 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:03.902 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.902 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=135 00:21:03.902 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 135 -ge 100 ']' 00:21:03.902 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:21:03.902 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:21:03.902 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- 
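The waitforio loop traced above (target/shutdown.sh@50–@69) polls `bdev_get_iostat` over the bdevperf RPC socket until `num_read_ops` reaches 100, sampling at most 10 times with a 0.25 s sleep between tries; the trace shows readings of 3, then 67, then 135. A sketch of that loop, where `get_read_ops` is a stand-in for the real `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'` pipeline (the file-backed fake below is purely for illustration, since a command substitution runs in a subshell and cannot update a shell variable):

```shell
#!/usr/bin/env bash
# waitforio: succeed (return 0) once the read counter reaches 100 ops,
# fail (return 1) after 10 samples 0.25 s apart -- same shape as the
# loop in target/shutdown.sh.
waitforio() {
  local ret=1 i read_io_count
  for ((i = 10; i != 0; i--)); do
    read_io_count=$(get_read_ops)
    if [ "$read_io_count" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25
  done
  return $ret
}

# Fake counter reproducing the samples seen in the trace: 3, 67, 135.
# State lives in a temp file so it survives the $() subshell; past the
# last sample it keeps returning 135.
samples=(3 67 135)
state=$(mktemp)
echo 0 > "$state"
get_read_ops() {
  local idx
  idx=$(< "$state")
  echo $((idx + 1)) > "$state"
  echo "${samples[idx]:-135}"
}

waitforio && echo "I/O is flowing"
```

With these samples the loop exits on the third iteration, exactly as in the trace (135 -ge 100 sets ret=0 and breaks).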
# return 0 00:21:03.902 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1154231 00:21:03.902 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1154231 ']' 00:21:03.902 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1154231 00:21:03.902 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:21:03.902 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:03.902 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1154231 00:21:04.187 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:04.187 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:04.187 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1154231' killing process with pid 1154231 00:21:04.187 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1154231 00:21:04.187 18:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1154231 00:21:04.187 [2024-07-15 18:46:20.646751] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e430 is same with the state(5) to be set 00:21:04.188 [2024-07-15 18:46:20.648486] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780e30 is same with the state(5) to be set 00:21:04.188 [2024-07-15 18:46:20.650229] tcp.c:1607:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650391] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650397] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650404] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650410] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650416] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650422] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650429] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650436] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650442] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650448] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650454] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650462] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 
is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650468] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650474] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650481] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650491] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650497] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650503] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650509] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650515] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650521] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650529] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650536] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650541] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 
00:21:04.189 [2024-07-15 18:46:20.650548] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650554] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650559] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650565] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650571] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650577] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650584] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650590] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650597] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650602] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650608] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650614] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650620] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.650626] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e8d0 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652037] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652065] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652073] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652080] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652088] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652095] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652101] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652108] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652114] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652127] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652140] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652147] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652154] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652159] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652165] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652171] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652183] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652190] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652196] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652202] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 
is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652214] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652220] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652230] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652247] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652254] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652260] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652266] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652272] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652279] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652285] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 
00:21:04.189 [2024-07-15 18:46:20.652291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652298] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652305] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652310] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.189 [2024-07-15 18:46:20.652317] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.652323] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.652329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.652335] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.652340] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.652346] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.652353] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.652360] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.652366] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.652372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.652377] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.652383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.652389] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.652395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.652402] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.652410] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.652417] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.652423] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.652429] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.652435] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.652440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.652446] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.652452] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.652458] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77ed90 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653581] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653611] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653620] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653626] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653633] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653640] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653647] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653652] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653658] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 
is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653664] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653671] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653677] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653683] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653690] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653696] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653701] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653707] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653714] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653720] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653730] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653736] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 
00:21:04.190 [2024-07-15 18:46:20.653741] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653748] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653754] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653759] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653766] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653772] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653779] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653784] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653790] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653796] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653802] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653815] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653822] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653828] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653834] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653840] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653846] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653852] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653858] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653865] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653871] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653877] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653883] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653890] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653895] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653903] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653908] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653914] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653921] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653927] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653933] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653939] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653945] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653951] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653957] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653962] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653971] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 
is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653978] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653984] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653990] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.653996] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.654002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f230 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.654710] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.654726] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.654732] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.654740] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.654748] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.190 [2024-07-15 18:46:20.654754] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.191 [2024-07-15 18:46:20.654761] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 
00:21:04.191 [2024-07-15 18:46:20.654767] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.191 [2024-07-15 18:46:20.654773] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.191 [2024-07-15 18:46:20.654779] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.191 [2024-07-15 18:46:20.654788] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.191 [2024-07-15 18:46:20.654795] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.191 [2024-07-15 18:46:20.654802] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.191 [2024-07-15 18:46:20.654808] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.191 [2024-07-15 18:46:20.654815] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.191 [2024-07-15 18:46:20.654821] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.191 [2024-07-15 18:46:20.654828] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.191 [2024-07-15 18:46:20.654834] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.191 [2024-07-15 18:46:20.654840] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.191 [2024-07-15 18:46:20.654848] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.191 [2024-07-15 18:46:20.654856] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.191 [2024-07-15 18:46:20.654862] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.191 [2024-07-15 18:46:20.654869] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.191 [2024-07-15 18:46:20.654875] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.191 [2024-07-15 18:46:20.654881] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.191 [2024-07-15 18:46:20.654887] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.191 [2024-07-15 18:46:20.654894] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.191 [2024-07-15 18:46:20.654902] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.191 [2024-07-15 18:46:20.654909] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.191 [2024-07-15 18:46:20.654915] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.191 [2024-07-15 18:46:20.654921] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set 00:21:04.191 [2024-07-15 18:46:20.654927] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
00:21:04.191 [2024-07-15 18:46:20.654933] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f6d0 is same with the state(5) to be set
00:21:04.191 [identical message repeated for tqpair=0x77f6d0 through 18:46:20.655123]
00:21:04.191 [2024-07-15 18:46:20.655916] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77fb90 is same with the state(5) to be set
00:21:04.191 [identical message repeated for tqpair=0x77fb90 at 18:46:20.655930]
00:21:04.191 [2024-07-15 18:46:20.656790] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780030 is same with the state(5) to be set
00:21:04.192 [identical message repeated for tqpair=0x780030 through 18:46:20.657154]
00:21:04.192 [2024-07-15 18:46:20.657918] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7804d0 is same with the state(5) to be set
00:21:04.193 [identical message repeated for tqpair=0x7804d0 through 18:46:20.658260]
00:21:04.193 [2024-07-15 18:46:20.660592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:04.193 [2024-07-15 18:46:20.660623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.193 [same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:1, cid:2, cid:3]
00:21:04.193 [2024-07-15 18:46:20.660676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21190 is same with the state(5) to be set
00:21:04.193 [same block of four aborted ASYNC EVENT REQUESTs followed by one recv-state error repeated for tqpair=0x20ca8d0, 0x1a4d340, 0x20b38b0, 0x1f45bf0, 0x1f3b1d0, 0x1efec70, 0x20d3050, 0x1f42b30, through 18:46:20.661331]
00:21:04.193 [2024-07-15 18:46:20.661608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.193 [2024-07-15 18:46:20.661626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.194 [same WRITE / ABORTED - SQ DELETION pair repeated for cid:1 lba:24704, cid:2 lba:24832, cid:3 lba:24960, cid:4 lba:25088, cid:5 lba:25216]
00:21:04.194 [2024-07-15 18:46:20.661718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-15 18:46:20.661725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.661733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.661740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.661748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.661754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.661762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.661772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.661781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.661788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.661796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.661802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.661811] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.661817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.661825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.661832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.661840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.661846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.661855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.661862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.661870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.661878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.661886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.661892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.661900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.661907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.661914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.661921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.661928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.661935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.661942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.661949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.661958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.661965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.661973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.661981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.661989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.661995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.662003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.662011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.662019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.662025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.662033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.662040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.662048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.662054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 
[2024-07-15 18:46:20.662061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.662068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.662076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.662083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.662091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.662097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.662105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.662113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.662122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.662128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.662136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.662144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.662151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.662158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.662165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.662172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.662180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.662187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.662195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.662201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.662209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.662217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.662230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.662238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.662246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.662253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.662261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.194 [2024-07-15 18:46:20.662267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.194 [2024-07-15 18:46:20.662275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.662282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.662290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.662297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.662305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.662311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.662319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.662325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.662335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.662341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.662350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.662357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.662365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.662372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.662380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.662387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.662394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 
18:46:20.662401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.662409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.662416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.662424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.662430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.662438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.662444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.662452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.662460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.662469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.662476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.662484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.662491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.662499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.662507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.662516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.662523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.662531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.662538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.662546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.662553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.662561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.662567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.662575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.662582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.662649] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2040910 was disconnected and freed. reset controller. 00:21:04.195 [2024-07-15 18:46:20.664156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:21:04.195 [2024-07-15 18:46:20.664188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42b30 (9): Bad file descriptor 00:21:04.195 [2024-07-15 18:46:20.665096] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:04.195 [2024-07-15 18:46:20.665245] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:04.195 [2024-07-15 18:46:20.665423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.195 [2024-07-15 18:46:20.665439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42b30 with addr=10.0.0.2, port=4420 00:21:04.195 [2024-07-15 18:46:20.665447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42b30 is same with the state(5) to be set 00:21:04.195 [2024-07-15 18:46:20.665491] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:04.195 [2024-07-15 18:46:20.665532] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:04.195 [2024-07-15 18:46:20.665572] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:04.195 [2024-07-15 18:46:20.665616] 
nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:04.195 [2024-07-15 18:46:20.665660] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:04.195 [2024-07-15 18:46:20.665691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.665702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.665715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.665722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.665731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.665738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.665747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.665758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.665766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.665772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.665781] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.665788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.665797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.665804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.665812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.665819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.665827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.665834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.665843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.665849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.665858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.195 [2024-07-15 18:46:20.665864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.195 [2024-07-15 18:46:20.665873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.196 [2024-07-15 18:46:20.665880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.196 [2024-07-15 18:46:20.665888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.196 [2024-07-15 18:46:20.665894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.196 [2024-07-15 18:46:20.665903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.196 [2024-07-15 18:46:20.665910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.196 [2024-07-15 18:46:20.665918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.196 [2024-07-15 18:46:20.665925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.196 [2024-07-15 18:46:20.665933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.196 [2024-07-15 18:46:20.665940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.196 [2024-07-15 18:46:20.665950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:04.196 [2024-07-15 18:46:20.665958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.196 [2024-07-15 18:46:20.665966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.196 [2024-07-15 18:46:20.665972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.196 [2024-07-15 18:46:20.665982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.196 [2024-07-15 18:46:20.665988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.196 [2024-07-15 18:46:20.665997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.196 [2024-07-15 18:46:20.666006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.196 [2024-07-15 18:46:20.666015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.196 [2024-07-15 18:46:20.666022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.196 [2024-07-15 18:46:20.666030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.196 [2024-07-15 18:46:20.666037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.196 [2024-07-15 18:46:20.666046] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.196 [2024-07-15 18:46:20.666053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.196 [2024-07-15 18:46:20.666061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.196 [2024-07-15 18:46:20.666068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.196 [2024-07-15 18:46:20.666077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.196 [2024-07-15 18:46:20.666084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.196 [2024-07-15 18:46:20.666092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.196 [2024-07-15 18:46:20.666098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.196 [2024-07-15 18:46:20.666107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.196 [2024-07-15 18:46:20.666114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.196 [2024-07-15 18:46:20.666122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.196 [2024-07-15 18:46:20.666129] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.196 [2024-07-15 18:46:20.666137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.196 [2024-07-15 18:46:20.667096] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7804d0 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667105] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7804d0 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667112] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7804d0 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667119] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7804d0 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667126] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7804d0 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7804d0 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667138] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7804d0 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667144] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7804d0 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667715] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667730] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667737] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667744] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667751] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667757] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667764] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667771] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667777] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667783] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667790] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667796] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667802] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667808] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667814] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the 
state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667820] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667827] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667834] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667839] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667850] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667857] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667863] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667869] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667875] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667882] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667889] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667895] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 
18:46:20.667901] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667907] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667914] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667920] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667927] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667932] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667939] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667945] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667951] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667957] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667963] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667968] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667974] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667982] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667988] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667994] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.667999] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.668005] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.668012] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.196 [2024-07-15 18:46:20.668020] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.197 [2024-07-15 18:46:20.668026] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.197 [2024-07-15 18:46:20.668031] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.197 [2024-07-15 18:46:20.668037] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.197 [2024-07-15 18:46:20.668044] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.197 [2024-07-15 18:46:20.668050] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.197 [2024-07-15 18:46:20.668056] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.197 [2024-07-15 18:46:20.668062] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.197 [2024-07-15 18:46:20.668069] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.197 [2024-07-15 18:46:20.668076] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.197 [2024-07-15 18:46:20.668082] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.197 [2024-07-15 18:46:20.668088] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.197 [2024-07-15 18:46:20.668094] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.197 [2024-07-15 18:46:20.668100] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.197 [2024-07-15 18:46:20.668106] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.197 [2024-07-15 18:46:20.668112] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.197 [2024-07-15 18:46:20.668117] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780970 is same with the state(5) to be set 00:21:04.197 [2024-07-15 18:46:20.675183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:04.197 [2024-07-15 18:46:20.675335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675813] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675929] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.197 [2024-07-15 18:46:20.675940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.197 [2024-07-15 18:46:20.675949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.198 [2024-07-15 18:46:20.675959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041de0 is same with the state(5) to be set 00:21:04.198 [2024-07-15 18:46:20.676032] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2041de0 was disconnected and freed. reset controller. 00:21:04.198 [2024-07-15 18:46:20.676181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42b30 (9): Bad file descriptor 00:21:04.198 [2024-07-15 18:46:20.676236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.198 [2024-07-15 18:46:20.676250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.198 [2024-07-15 18:46:20.676261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.198 [2024-07-15 18:46:20.676270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.198 [2024-07-15 18:46:20.676280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.198 [2024-07-15 18:46:20.676290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.198 [2024-07-15 
18:46:20.676300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.198 [2024-07-15 18:46:20.676309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.198 [2024-07-15 18:46:20.676318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ca0d0 is same with the state(5) to be set 00:21:04.198 [2024-07-15 18:46:20.676339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f21190 (9): Bad file descriptor 00:21:04.198 [2024-07-15 18:46:20.676354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ca8d0 (9): Bad file descriptor 00:21:04.198 [2024-07-15 18:46:20.676375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d340 (9): Bad file descriptor 00:21:04.198 [2024-07-15 18:46:20.676396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b38b0 (9): Bad file descriptor 00:21:04.198 [2024-07-15 18:46:20.676417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f45bf0 (9): Bad file descriptor 00:21:04.198 [2024-07-15 18:46:20.676438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f3b1d0 (9): Bad file descriptor 00:21:04.198 [2024-07-15 18:46:20.676457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efec70 (9): Bad file descriptor 00:21:04.198 [2024-07-15 18:46:20.676475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20d3050 (9): Bad file descriptor 00:21:04.198 [2024-07-15 18:46:20.676494] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:04.198 [2024-07-15 18:46:20.677950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:21:04.198 [2024-07-15 18:46:20.677985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:21:04.198 [2024-07-15 18:46:20.677994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:21:04.198 [2024-07-15 18:46:20.678004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:21:04.198 [2024-07-15 18:46:20.678154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:04.198 [2024-07-15 18:46:20.678448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.198 [2024-07-15 18:46:20.678465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f21190 with addr=10.0.0.2, port=4420
00:21:04.198 [2024-07-15 18:46:20.678476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21190 is same with the state(5) to be set
00:21:04.198 [2024-07-15 18:46:20.679489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f21190 (9): Bad file descriptor
00:21:04.198 [2024-07-15 18:46:20.679565] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:04.198 [2024-07-15 18:46:20.679697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.198 [2024-07-15 18:46:20.679711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.198 [2024-07-15 18:46:20.679729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.198 [2024-07-15 18:46:20.679739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.198 [2024-07-15 18:46:20.679751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.198 [2024-07-15 18:46:20.679761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.198 [2024-07-15 18:46:20.679773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.198 [2024-07-15 18:46:20.679783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.198 [2024-07-15 18:46:20.679794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.198 [2024-07-15 18:46:20.679804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.198 [2024-07-15 18:46:20.679816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.198 [2024-07-15 18:46:20.679830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.198 [2024-07-15 18:46:20.679840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027a60 is same with the state(5) to be set
00:21:04.198 [2024-07-15 18:46:20.679901] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2027a60 was disconnected and freed. reset controller.
00:21:04.198 [2024-07-15 18:46:20.679927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:21:04.198 [2024-07-15 18:46:20.679937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:21:04.198 [2024-07-15 18:46:20.679947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:21:04.198 [2024-07-15 18:46:20.680972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:04.198 [2024-07-15 18:46:20.680988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:21:04.198 [2024-07-15 18:46:20.681003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ca0d0 (9): Bad file descriptor
00:21:04.198 [2024-07-15 18:46:20.681675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.198 [2024-07-15 18:46:20.681699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ca0d0 with addr=10.0.0.2, port=4420
00:21:04.198 [2024-07-15 18:46:20.681709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ca0d0 is same with the state(5) to be set
00:21:04.198 [2024-07-15 18:46:20.681764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ca0d0 (9): Bad file descriptor
00:21:04.198 [2024-07-15 18:46:20.681816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:21:04.198 [2024-07-15 18:46:20.681827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:21:04.198 [2024-07-15 18:46:20.681836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:21:04.198 [2024-07-15 18:46:20.681882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:04.198 [2024-07-15 18:46:20.686354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.198 [2024-07-15 18:46:20.686378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.198 [2024-07-15 18:46:20.686392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.198 [2024-07-15 18:46:20.686402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.198 [2024-07-15 18:46:20.686414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.198 [2024-07-15 18:46:20.686424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.198 [2024-07-15 18:46:20.686435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.198 [2024-07-15 18:46:20.686444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.198 [2024-07-15 18:46:20.686456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.198 [2024-07-15 18:46:20.686465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.198 [2024-07-15 18:46:20.686476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.198 [2024-07-15 18:46:20.686491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.198 [2024-07-15 18:46:20.686503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.198 [2024-07-15 18:46:20.686512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.198 [2024-07-15 18:46:20.686523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.198 [2024-07-15 18:46:20.686533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.198 [2024-07-15 18:46:20.686545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.198 [2024-07-15 18:46:20.686554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.198 [2024-07-15 18:46:20.686566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.198 [2024-07-15 18:46:20.686575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.198 [2024-07-15 18:46:20.686586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.198 [2024-07-15 18:46:20.686595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.198 [2024-07-15 18:46:20.686607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.198 [2024-07-15 18:46:20.686616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.198 [2024-07-15 18:46:20.686627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.198 [2024-07-15 18:46:20.686637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.198 [2024-07-15 18:46:20.686648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.198 [2024-07-15 18:46:20.686657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.198 [2024-07-15 18:46:20.686668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.198 [2024-07-15 18:46:20.686678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.198 [2024-07-15 18:46:20.686689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.198 [2024-07-15 18:46:20.686697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.686710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.686719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.686730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.686739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.686753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.686762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.686774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.686784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.686795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.686804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.686815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.686826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.686838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.686847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.686859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.686869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.686880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.686889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.686901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.686910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.686922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.686931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.686943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.686951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.686963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.686972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.686983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.686992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.687004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.687014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.687026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.687035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.687047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.687056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.687067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.687077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.687088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.687097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.687108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.687117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.687128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.687137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.687148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.687158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.687170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.687180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.687191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.687200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.687211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.687221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.687236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.687245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.687256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.687266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.687279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.687288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.687300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.687309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.687320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.687330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.687341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.687350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.687361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.687370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.687381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.687390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.687402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.687411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.687422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.687432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.687444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.687453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.687464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.199 [2024-07-15 18:46:20.687473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.199 [2024-07-15 18:46:20.687484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.687493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.687505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.687514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.687525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.687537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.687549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.687558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.687569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.687579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.687590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.687599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.687611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.687620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.687631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.687640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.687652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.687661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.687672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.687682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.687693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.687702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.687712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203fea0 is same with the state(5) to be set
00:21:04.200 [2024-07-15 18:46:20.689047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.200 [2024-07-15 18:46:20.689702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.200 [2024-07-15 18:46:20.689713] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.200 [2024-07-15 18:46:20.689722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.689733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.689742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.689754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.689762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.689773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.689783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.689794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.689802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.689814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.689823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.689834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.689843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.689855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.689865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.689877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.689887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.689898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.689907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.689920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.689930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.689940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 
[2024-07-15 18:46:20.689950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.689961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.689970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.689982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.689991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.690002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.690011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.690022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.690031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.690042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.690052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.690064] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.690073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.690084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.690093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.690105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.690113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.690124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.690134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.690145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.690156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.690168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.690179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.690190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.690200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.690211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.690220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.690235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.690244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.690255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.690264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.690275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.690284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.690295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.690304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.690316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.690325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.690335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.690345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.690356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.690365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.690376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.690385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.690395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc6490 is same with the state(5) to be set 00:21:04.201 [2024-07-15 18:46:20.691749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.691766] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.691779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.691792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.691804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.691813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.691824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.691835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.691848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.691857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.691869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.691878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.691889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.691899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.691911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.691920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.691931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.691941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.201 [2024-07-15 18:46:20.691952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.201 [2024-07-15 18:46:20.691961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.691973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.691982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.691993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 
18:46:20.692128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692247] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 
[2024-07-15 18:46:20.692486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.202 [2024-07-15 18:46:20.692704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.202 [2024-07-15 18:46:20.692713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.202 [2024-07-15 18:46:20.692724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.202 [2024-07-15 18:46:20.692733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs repeated for cid:47-63 (lba:30592-32640) ...]
00:21:04.203 [2024-07-15 18:46:20.693048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc7920 is same with the state(5) to be set
00:21:04.203 [2024-07-15 18:46:20.694064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.203 [2024-07-15 18:46:20.694079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs repeated for cid:1-63 (lba:24704-32640) ...]
00:21:04.205 [2024-07-15 18:46:20.695062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef8b70 is same with the state(5) to be set
00:21:04.205 [2024-07-15 18:46:20.696081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.205 [2024-07-15 18:46:20.696095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs repeated for cid:1-28 (lba:24704-28160) ...]
00:21:04.205 [2024-07-15 18:46:20.703325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.205 [2024-07-15 18:46:20.703335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:21:04.205 [2024-07-15 18:46:20.703346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.205 [2024-07-15 18:46:20.703356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.205 [2024-07-15 18:46:20.703368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.205 [2024-07-15 18:46:20.703377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.205 [2024-07-15 18:46:20.703388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.205 [2024-07-15 18:46:20.703397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.205 [2024-07-15 18:46:20.703409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.205 [2024-07-15 18:46:20.703418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.205 [2024-07-15 18:46:20.703432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.205 [2024-07-15 18:46:20.703441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.205 [2024-07-15 18:46:20.703453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.205 [2024-07-15 
18:46:20.703466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.703478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.703487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.703500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.703509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.703521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.703530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.703541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.703551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.703562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.703572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.703583] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.703593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.703604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.703614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.703625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.703634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.703646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.703654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.703666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.703675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.703687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.703696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.703710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.703719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.703733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.703742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.703753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.703763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.703775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.703784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.703795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.703805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.703816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 
[2024-07-15 18:46:20.703825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.703837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.703845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.703857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.703866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.703878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.703887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.703899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.703908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.703919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.703928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.703939] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.703948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.703960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.703970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.703981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.703992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.704004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.704013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.704025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.704033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.704045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.704054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.704065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efa040 is same with the state(5) to be set 00:21:04.206 [2024-07-15 18:46:20.705393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.705410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.705424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.705434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.705446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.705456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.705468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.705477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.705488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.705498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:04.206 [2024-07-15 18:46:20.705510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.705519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.705531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.705540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.705551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.705561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.705572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.705580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.705596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.705605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.206 [2024-07-15 18:46:20.705616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.206 [2024-07-15 18:46:20.705625] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.705636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.705648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.705660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.705669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.705680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.705690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.705701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.705710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.705721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.705731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.705743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.705752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.705764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.705774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.705786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.705796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.705808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.705817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.705828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.705837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.705849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.705860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:04.207 [2024-07-15 18:46:20.705872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.705882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.705893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.705903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.705913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.705923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.705933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.705943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.705954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.705963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.705975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.705983] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.705995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.706004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.706016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.706025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.706036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.706045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.706056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.706065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.706076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.706086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.706098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.706109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.706122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.706131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.706142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.706152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.706163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.706173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.706184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.706193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.706204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.706213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.706230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.706240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.706252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.706261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.706272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.706282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.706294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.706304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.706315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.706325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.706337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 
18:46:20.706347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.706358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.706367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.706378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.706390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.706401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.706410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.706422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.706430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.706442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.706451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.706463] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.706472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.706484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.207 [2024-07-15 18:46:20.706493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.207 [2024-07-15 18:46:20.706505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.706515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.706526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.706535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.706547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.706555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.706567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.706575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.706587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.706595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.706607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.706616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.706627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.706636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.706649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.706659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.706670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.706678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.706690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 
[2024-07-15 18:46:20.706699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.706710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.706720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.706731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.706740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.706751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20432b0 is same with the state(5) to be set 00:21:04.208 [2024-07-15 18:46:20.709466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.709489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.709505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.709516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.709529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.709539] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.709550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.709560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.709571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.709581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.709593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.709603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.709615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.709623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.709639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.709649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.709661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.709670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.709682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.709692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.709704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.709713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.709725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.709735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.709746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.709756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.709768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.709777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:04.208 [2024-07-15 18:46:20.709789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.709799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.709811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.709820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.709832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.709841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.709853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.709862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.709875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.709884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.709896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.709908] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.709920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.208 [2024-07-15 18:46:20.709930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.208 [2024-07-15 18:46:20.709941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.709950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.709961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.709971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.709983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.709992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.710014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.710035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.710056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.710078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.710099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.710121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.710142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.710162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.710185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.710206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.710232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.710254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 
18:46:20.710275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.710296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.710317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.710338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.710360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.710381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710393] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.710402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.710423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.710447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.710468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.710489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.710510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.710530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.710551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.710572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.710594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.710615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 
[2024-07-15 18:46:20.710635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.209 [2024-07-15 18:46:20.710656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.209 [2024-07-15 18:46:20.710667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.210 [2024-07-15 18:46:20.710677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.210 [2024-07-15 18:46:20.710690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.210 [2024-07-15 18:46:20.710700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.210 [2024-07-15 18:46:20.710714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.210 [2024-07-15 18:46:20.710723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.210 [2024-07-15 18:46:20.710735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.210 [2024-07-15 18:46:20.710744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.210 [2024-07-15 18:46:20.710756] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.210 [2024-07-15 18:46:20.710765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.210 [2024-07-15 18:46:20.710776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.210 [2024-07-15 18:46:20.710786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.210 [2024-07-15 18:46:20.710798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.210 [2024-07-15 18:46:20.710807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.210 [2024-07-15 18:46:20.710818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.210 [2024-07-15 18:46:20.710827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.210 [2024-07-15 18:46:20.710838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.210 [2024-07-15 18:46:20.710848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.210 [2024-07-15 18:46:20.710857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2028ef0 is same with the state(5) to be set 00:21:04.210 [2024-07-15 18:46:20.712478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] 
resetting controller
00:21:04.210 [2024-07-15 18:46:20.712502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:04.210 [2024-07-15 18:46:20.712514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:21:04.210 [2024-07-15 18:46:20.712526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:21:04.210 [2024-07-15 18:46:20.712612] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:04.210 [2024-07-15 18:46:20.712629] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:04.210 [2024-07-15 18:46:20.712643] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:04.210 [2024-07-15 18:46:20.712656] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:04.210 [2024-07-15 18:46:20.712751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:21:04.210 [2024-07-15 18:46:20.712766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:21:04.210 [2024-07-15 18:46:20.712777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:21:04.210 task offset: 24576 on job bdev=Nvme6n1 fails
00:21:04.210
00:21:04.210 Latency(us)
00:21:04.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:04.210 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:04.210 Job: Nvme1n1 ended in about 0.89 seconds with error
00:21:04.210 Verification LBA range: start 0x0 length 0x400
00:21:04.210 Nvme1n1 : 0.89 219.57 13.72 71.69 0.00 217498.97 6354.14 218833.25
00:21:04.210 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:04.210 Job: Nvme2n1 ended in about 0.90 seconds with error
00:21:04.210 Verification LBA range: start 0x0 length 0x400
00:21:04.210 Nvme2n1 : 0.90 214.44 13.40 71.48 0.00 217623.15 16184.54 229774.91
00:21:04.210 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:04.210 Job: Nvme3n1 ended in about 0.90 seconds with error
00:21:04.210 Verification LBA range: start 0x0 length 0x400
00:21:04.210 Nvme3n1 : 0.90 219.40 13.71 71.28 0.00 210175.03 14816.83 201508.95
00:21:04.210 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:04.210 Job: Nvme4n1 ended in about 0.90 seconds with error
00:21:04.210 Verification LBA range: start 0x0 length 0x400
00:21:04.210 Nvme4n1 : 0.90 213.35 13.33 71.12 0.00 210786.62 17666.23 213362.42
00:21:04.210 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:04.210 Job: Nvme5n1 ended in about 0.91 seconds with error
00:21:04.210 Verification LBA range: start 0x0 length 0x400
00:21:04.210 Nvme5n1 : 0.91 211.22 13.20 70.41 0.00 209067.19 16754.42 218833.25
00:21:04.210 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:04.210 Job: Nvme6n1 ended in about 0.87 seconds with error
00:21:04.210 Verification LBA range: start 0x0 length 0x400
00:21:04.210 Nvme6n1 : 0.87 221.20 13.82 73.73 0.00 194975.25 3034.60 219745.06
00:21:04.210 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:04.210 Job: Nvme7n1 ended in about 0.88 seconds with error
00:21:04.210 Verification LBA range: start 0x0 length 0x400
00:21:04.210 Nvme7n1 : 0.88 217.79 13.61 72.60 0.00 194360.54 13449.13 223392.28
00:21:04.210 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:04.210 Job: Nvme8n1 ended in about 0.91 seconds with error
00:21:04.210 Verification LBA range: start 0x0 length 0x400
00:21:04.210 Nvme8n1 : 0.91 210.60 13.16 70.20 0.00 197947.88 14246.96 216097.84
00:21:04.210 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:04.210 Job: Nvme9n1 ended in about 0.88 seconds with error
00:21:04.210 Verification LBA range: start 0x0 length 0x400
00:21:04.210 Nvme9n1 : 0.88 214.76 13.42 6.78 0.00 244552.50 17552.25 229774.91
00:21:04.210 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:04.210 Job: Nvme10n1 ended in about 0.92 seconds with error
00:21:04.210 Verification LBA range: start 0x0 length 0x400
00:21:04.210 Nvme10n1 : 0.92 144.14 9.01 69.89 0.00 249818.35 22111.28 246187.41
00:21:04.210 ===================================================================================================================
00:21:04.210 Total : 2086.47 130.40 649.18 0.00 213083.23 3034.60 246187.41
00:21:04.210 [2024-07-15 18:46:20.736280] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:04.210 [2024-07-15 18:46:20.736319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting
controller
00:21:04.210 [2024-07-15 18:46:20.736667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.210 [2024-07-15 18:46:20.736687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42b30 with addr=10.0.0.2, port=4420
00:21:04.210 [2024-07-15 18:46:20.736697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42b30 is same with the state(5) to be set
00:21:04.210 [2024-07-15 18:46:20.736870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.210 [2024-07-15 18:46:20.736882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efec70 with addr=10.0.0.2, port=4420
00:21:04.210 [2024-07-15 18:46:20.736896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efec70 is same with the state(5) to be set
00:21:04.210 [2024-07-15 18:46:20.737075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.210 [2024-07-15 18:46:20.737085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ca8d0 with addr=10.0.0.2, port=4420
00:21:04.210 [2024-07-15 18:46:20.737092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ca8d0 is same with the state(5) to be set
00:21:04.210 [2024-07-15 18:46:20.737344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.210 [2024-07-15 18:46:20.737357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20d3050 with addr=10.0.0.2, port=4420
00:21:04.210 [2024-07-15 18:46:20.737365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d3050 is same with the state(5) to be set
00:21:04.210 [2024-07-15 18:46:20.738973] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:21:04.210 [2024-07-15 18:46:20.739287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.210 [2024-07-15 18:46:20.739303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f3b1d0 with addr=10.0.0.2, port=4420
00:21:04.210 [2024-07-15 18:46:20.739311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3b1d0 is same with the state(5) to be set
00:21:04.210 [2024-07-15 18:46:20.739557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.210 [2024-07-15 18:46:20.739569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4d340 with addr=10.0.0.2, port=4420
00:21:04.210 [2024-07-15 18:46:20.739577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4d340 is same with the state(5) to be set
00:21:04.210 [2024-07-15 18:46:20.739831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.210 [2024-07-15 18:46:20.739843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f45bf0 with addr=10.0.0.2, port=4420
00:21:04.210 [2024-07-15 18:46:20.739851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f45bf0 is same with the state(5) to be set
00:21:04.210 [2024-07-15 18:46:20.740016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.210 [2024-07-15 18:46:20.740029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20b38b0 with addr=10.0.0.2, port=4420
00:21:04.210 [2024-07-15 18:46:20.740036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b38b0 is same with the state(5) to be set
00:21:04.210 [2024-07-15 18:46:20.740050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42b30 (9): Bad file descriptor
00:21:04.210 [2024-07-15 18:46:20.740061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efec70 (9): Bad file descriptor
00:21:04.210 [2024-07-15 18:46:20.740070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ca8d0 (9): Bad file descriptor
00:21:04.210 [2024-07-15 18:46:20.740079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20d3050 (9): Bad file descriptor
00:21:04.210 [2024-07-15 18:46:20.740105] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:04.210 [2024-07-15 18:46:20.740123] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:04.210 [2024-07-15 18:46:20.740133] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:04.210 [2024-07-15 18:46:20.740143] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:04.210 [2024-07-15 18:46:20.740154] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:04.210 [2024-07-15 18:46:20.740228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:21:04.210 [2024-07-15 18:46:20.740431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.211 [2024-07-15 18:46:20.740444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f21190 with addr=10.0.0.2, port=4420 00:21:04.211 [2024-07-15 18:46:20.740451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21190 is same with the state(5) to be set 00:21:04.211 [2024-07-15 18:46:20.740460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f3b1d0 (9): Bad file descriptor 00:21:04.211 [2024-07-15 18:46:20.740469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4d340 (9): Bad file descriptor 00:21:04.211 [2024-07-15 18:46:20.740478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f45bf0 (9): Bad file descriptor 00:21:04.211 [2024-07-15 18:46:20.740487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b38b0 (9): Bad file descriptor 00:21:04.211 [2024-07-15 18:46:20.740495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:21:04.211 [2024-07-15 18:46:20.740502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:21:04.211 [2024-07-15 18:46:20.740509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:21:04.211 [2024-07-15 18:46:20.740521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:04.211 [2024-07-15 18:46:20.740528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:04.211 [2024-07-15 18:46:20.740534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:04.211 [2024-07-15 18:46:20.740543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:04.211 [2024-07-15 18:46:20.740549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:04.211 [2024-07-15 18:46:20.740555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:04.211 [2024-07-15 18:46:20.740565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:04.211 [2024-07-15 18:46:20.740572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:04.211 [2024-07-15 18:46:20.740579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:04.211 [2024-07-15 18:46:20.740650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:04.211 [2024-07-15 18:46:20.740660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:04.211 [2024-07-15 18:46:20.740668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:04.211 [2024-07-15 18:46:20.740674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:04.211 [2024-07-15 18:46:20.740835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.211 [2024-07-15 18:46:20.740846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ca0d0 with addr=10.0.0.2, port=4420 00:21:04.211 [2024-07-15 18:46:20.740853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ca0d0 is same with the state(5) to be set 00:21:04.211 [2024-07-15 18:46:20.740862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f21190 (9): Bad file descriptor 00:21:04.211 [2024-07-15 18:46:20.740869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:21:04.211 [2024-07-15 18:46:20.740876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:21:04.211 [2024-07-15 18:46:20.740885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:21:04.211 [2024-07-15 18:46:20.740894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:04.211 [2024-07-15 18:46:20.740900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:04.211 [2024-07-15 18:46:20.740907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:04.211 [2024-07-15 18:46:20.740915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:21:04.211 [2024-07-15 18:46:20.740921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:21:04.211 [2024-07-15 18:46:20.740929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:21:04.211 [2024-07-15 18:46:20.740938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:04.211 [2024-07-15 18:46:20.740944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:04.211 [2024-07-15 18:46:20.740950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:04.211 [2024-07-15 18:46:20.740975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:04.211 [2024-07-15 18:46:20.740983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:04.211 [2024-07-15 18:46:20.740989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:04.211 [2024-07-15 18:46:20.740994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:04.211 [2024-07-15 18:46:20.741001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ca0d0 (9): Bad file descriptor 00:21:04.211 [2024-07-15 18:46:20.741008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:21:04.211 [2024-07-15 18:46:20.741014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:21:04.211 [2024-07-15 18:46:20.741020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:21:04.211 [2024-07-15 18:46:20.741048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:04.211 [2024-07-15 18:46:20.741056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:21:04.211 [2024-07-15 18:46:20.741061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:21:04.211 [2024-07-15 18:46:20.741067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:21:04.211 [2024-07-15 18:46:20.741090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:04.491 18:46:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:21:04.491 18:46:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:21:05.427 18:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1154516 00:21:05.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1154516) - No such process 00:21:05.427 18:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:21:05.427 18:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:21:05.427 18:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:05.427 18:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:05.427 18:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:05.427 18:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:05.427 18:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:05.427 18:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@117 -- # sync 00:21:05.427 18:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:05.427 18:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:21:05.427 18:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:05.427 18:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:05.427 rmmod nvme_tcp 00:21:05.427 rmmod nvme_fabrics 00:21:05.687 rmmod nvme_keyring 00:21:05.687 18:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:05.687 18:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:21:05.687 18:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:21:05.687 18:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:05.687 18:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:05.687 18:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:05.687 18:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:05.687 18:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:05.687 18:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:05.687 18:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.687 18:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.687 18:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.589 18:46:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 
-- # ip -4 addr flush cvl_0_1 00:21:07.589 00:21:07.589 real 0m7.719s 00:21:07.589 user 0m18.859s 00:21:07.589 sys 0m1.255s 00:21:07.589 18:46:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:07.589 18:46:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:07.589 ************************************ 00:21:07.589 END TEST nvmf_shutdown_tc3 00:21:07.589 ************************************ 00:21:07.589 18:46:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:07.589 18:46:24 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:21:07.589 00:21:07.589 real 0m30.738s 00:21:07.589 user 1m16.464s 00:21:07.589 sys 0m8.080s 00:21:07.589 18:46:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:07.589 18:46:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:07.589 ************************************ 00:21:07.589 END TEST nvmf_shutdown 00:21:07.589 ************************************ 00:21:07.848 18:46:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:07.848 18:46:24 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:21:07.848 18:46:24 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:07.848 18:46:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:07.848 18:46:24 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:21:07.848 18:46:24 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:07.848 18:46:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:07.848 18:46:24 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:21:07.848 18:46:24 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:07.848 18:46:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:07.848 
18:46:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:07.848 18:46:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:07.848 ************************************ 00:21:07.848 START TEST nvmf_multicontroller 00:21:07.848 ************************************ 00:21:07.848 18:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:07.848 * Looking for test storage... 00:21:07.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:07.848 18:46:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:07.848 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:07.848 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:07.848 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:07.848 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:07.848 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:07.848 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:07.848 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:07.848 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:07.848 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:07.848 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:07.848 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:07.848 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:07.848 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:07.848 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:07.848 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:07.848 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:07.848 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:07.848 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:07.848 18:46:24 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:07.848 18:46:24 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:07.848 18:46:24 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:07.848 18:46:24 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.848 18:46:24 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:21:07.849 18:46:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:13.116 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:13.116 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:21:13.116 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:13.116 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:13.116 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:13.116 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:13.116 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:13.116 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:21:13.116 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:13.116 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:21:13.116 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:21:13.116 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:21:13.116 18:46:29 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@297 -- # local -ga x722 00:21:13.116 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:21:13.116 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:21:13.116 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:13.116 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:13.116 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:13.116 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:13.116 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:13.116 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:13.116 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:13.117 18:46:29 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:13.117 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:13.117 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:13.117 Found net devices under 0000:86:00.0: cvl_0_0 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:13.117 
Found net devices under 0000:86:00.1: cvl_0_1 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:13.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:13.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:21:13.117 00:21:13.117 --- 10.0.0.2 ping statistics --- 00:21:13.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.117 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:13.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:13.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:21:13.117 00:21:13.117 --- 10.0.0.1 ping statistics --- 00:21:13.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.117 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1158555 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1158555 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1158555 ']' 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:13.117 18:46:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:13.117 [2024-07-15 18:46:29.528161] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:21:13.117 [2024-07-15 18:46:29.528204] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.117 EAL: No free 2048 kB hugepages reported on node 1 00:21:13.117 [2024-07-15 18:46:29.585150] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:13.117 [2024-07-15 18:46:29.664453] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.117 [2024-07-15 18:46:29.664487] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.117 [2024-07-15 18:46:29.664495] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.117 [2024-07-15 18:46:29.664501] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:13.117 [2024-07-15 18:46:29.664507] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:13.117 [2024-07-15 18:46:29.664543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:13.117 [2024-07-15 18:46:29.664629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:13.117 [2024-07-15 18:46:29.664630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.682 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:13.682 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:13.682 18:46:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:13.682 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:13.682 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:13.682 18:46:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.682 18:46:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:13.682 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.682 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:13.682 [2024-07-15 18:46:30.388908] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.940 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set 
+x 00:21:13.941 Malloc0 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:13.941 [2024-07-15 18:46:30.448242] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.941 18:46:30 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:13.941 [2024-07-15 18:46:30.456187] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:13.941 Malloc1 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.941 
18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1158802 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1158802 /var/tmp/bdevperf.sock 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1158802 ']' 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:13.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:13.941 18:46:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:14.875 NVMe0n1 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.875 1 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:14.875 request: 00:21:14.875 { 00:21:14.875 "name": "NVMe0", 00:21:14.875 "trtype": "tcp", 00:21:14.875 "traddr": "10.0.0.2", 00:21:14.875 "adrfam": "ipv4", 00:21:14.875 "trsvcid": "4420", 00:21:14.875 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.875 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:14.875 "hostaddr": "10.0.0.2", 00:21:14.875 "hostsvcid": "60000", 00:21:14.875 "prchk_reftag": false, 00:21:14.875 "prchk_guard": false, 00:21:14.875 "hdgst": false, 00:21:14.875 "ddgst": false, 00:21:14.875 "method": "bdev_nvme_attach_controller", 00:21:14.875 "req_id": 1 00:21:14.875 } 00:21:14.875 Got JSON-RPC error response 00:21:14.875 response: 00:21:14.875 { 00:21:14.875 "code": -114, 00:21:14.875 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:14.875 } 00:21:14.875 
18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # 
set +x 00:21:14.875 request: 00:21:14.875 { 00:21:14.875 "name": "NVMe0", 00:21:14.875 "trtype": "tcp", 00:21:14.875 "traddr": "10.0.0.2", 00:21:14.875 "adrfam": "ipv4", 00:21:14.875 "trsvcid": "4420", 00:21:14.875 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:14.875 "hostaddr": "10.0.0.2", 00:21:14.875 "hostsvcid": "60000", 00:21:14.875 "prchk_reftag": false, 00:21:14.875 "prchk_guard": false, 00:21:14.875 "hdgst": false, 00:21:14.875 "ddgst": false, 00:21:14.875 "method": "bdev_nvme_attach_controller", 00:21:14.875 "req_id": 1 00:21:14.875 } 00:21:14.875 Got JSON-RPC error response 00:21:14.875 response: 00:21:14.875 { 00:21:14.875 "code": -114, 00:21:14.875 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:14.875 } 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 
-- # local arg=rpc_cmd 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:14.875 request: 00:21:14.875 { 00:21:14.875 "name": "NVMe0", 00:21:14.875 "trtype": "tcp", 00:21:14.875 "traddr": "10.0.0.2", 00:21:14.875 "adrfam": "ipv4", 00:21:14.875 "trsvcid": "4420", 00:21:14.875 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.875 "hostaddr": "10.0.0.2", 00:21:14.875 "hostsvcid": "60000", 00:21:14.875 "prchk_reftag": false, 00:21:14.875 "prchk_guard": false, 00:21:14.875 "hdgst": false, 00:21:14.875 "ddgst": false, 00:21:14.875 "multipath": "disable", 00:21:14.875 "method": "bdev_nvme_attach_controller", 00:21:14.875 "req_id": 1 00:21:14.875 } 00:21:14.875 Got JSON-RPC error response 00:21:14.875 response: 00:21:14.875 { 00:21:14.875 "code": -114, 00:21:14.875 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:21:14.875 } 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.875 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:14.875 request: 00:21:14.875 { 00:21:14.875 "name": "NVMe0", 00:21:14.875 "trtype": "tcp", 00:21:14.875 "traddr": "10.0.0.2", 00:21:14.875 "adrfam": "ipv4", 00:21:14.875 "trsvcid": "4420", 00:21:14.875 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.875 "hostaddr": "10.0.0.2", 00:21:14.875 
"hostsvcid": "60000", 00:21:14.875 "prchk_reftag": false, 00:21:14.875 "prchk_guard": false, 00:21:14.875 "hdgst": false, 00:21:14.875 "ddgst": false, 00:21:14.875 "multipath": "failover", 00:21:14.875 "method": "bdev_nvme_attach_controller", 00:21:14.875 "req_id": 1 00:21:14.875 } 00:21:14.876 Got JSON-RPC error response 00:21:14.876 response: 00:21:14.876 { 00:21:14.876 "code": -114, 00:21:14.876 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:14.876 } 00:21:14.876 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:14.876 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:14.876 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:14.876 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:14.876 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:15.133 18:46:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:15.133 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.133 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:15.133 00:21:15.133 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.133 18:46:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:15.133 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.133 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:15.133 18:46:31 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.133 18:46:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:15.133 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.133 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:15.391 00:21:15.391 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.391 18:46:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:15.391 18:46:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:15.391 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.391 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:15.391 18:46:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.391 18:46:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:15.391 18:46:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:16.763 0 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.763 
18:46:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1158802 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1158802 ']' 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1158802 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1158802 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1158802' 00:21:16.763 killing process with pid 1158802 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1158802 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1158802 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.763 18:46:33 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:21:16.763 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:16.763 [2024-07-15 18:46:30.557378] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:21:16.763 [2024-07-15 18:46:30.557430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1158802 ] 00:21:16.763 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.763 [2024-07-15 18:46:30.610920] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.763 [2024-07-15 18:46:30.685264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.763 [2024-07-15 18:46:31.930343] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name a0fb537c-fe5d-4e56-a7ae-f6b41e36d8a5 already exists 00:21:16.763 [2024-07-15 18:46:31.930371] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:a0fb537c-fe5d-4e56-a7ae-f6b41e36d8a5 alias for bdev NVMe1n1 00:21:16.763 [2024-07-15 18:46:31.930380] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:16.763 Running I/O for 1 seconds... 
00:21:16.763 00:21:16.763 Latency(us) 00:21:16.763 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.763 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:16.763 NVMe0n1 : 1.01 23082.95 90.17 0.00 0.00 5527.35 5100.41 15386.71 00:21:16.763 =================================================================================================================== 00:21:16.763 Total : 23082.95 90.17 0.00 0.00 5527.35 5100.41 15386.71 00:21:16.763 Received shutdown signal, test time was about 1.000000 seconds 00:21:16.763 00:21:16.763 Latency(us) 00:21:16.763 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.763 =================================================================================================================== 00:21:16.763 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:16.763 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:16.763 rmmod nvme_tcp 00:21:16.763 rmmod nvme_fabrics 00:21:16.763 rmmod nvme_keyring 
00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1158555 ']' 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1158555 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1158555 ']' 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1158555 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:21:16.763 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:16.764 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1158555 00:21:16.764 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:16.764 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:16.764 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1158555' 00:21:16.764 killing process with pid 1158555 00:21:16.764 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1158555 00:21:16.764 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1158555 00:21:17.021 18:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:17.021 18:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:17.021 18:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:17.021 18:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- 
# [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:17.021 18:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:17.021 18:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.021 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:17.021 18:46:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.553 18:46:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:19.553 00:21:19.553 real 0m11.342s 00:21:19.553 user 0m16.403s 00:21:19.553 sys 0m4.469s 00:21:19.553 18:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:19.553 18:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:19.553 ************************************ 00:21:19.553 END TEST nvmf_multicontroller 00:21:19.553 ************************************ 00:21:19.553 18:46:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:19.553 18:46:35 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:19.553 18:46:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:19.553 18:46:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:19.553 18:46:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:19.553 ************************************ 00:21:19.553 START TEST nvmf_aer 00:21:19.553 ************************************ 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:19.553 * Looking for test storage... 
00:21:19.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:21:19.553 18:46:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:24.850 18:46:41 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 
== e810 ]] 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:24.850 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:24.850 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:24.850 Found net devices under 0000:86:00.0: cvl_0_0 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:24.850 Found net devices under 0000:86:00.1: cvl_0_1 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.850 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 
00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set lo up 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:24.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:24.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:21:24.851 00:21:24.851 --- 10.0.0.2 ping statistics --- 00:21:24.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.851 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:24.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:24.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:21:24.851 00:21:24.851 --- 10.0.0.1 ping statistics --- 00:21:24.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.851 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1162772 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1162772 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1162772 ']' 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:24.851 18:46:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:25.110 [2024-07-15 18:46:41.593766] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:21:25.110 [2024-07-15 18:46:41.593809] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.110 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.110 [2024-07-15 18:46:41.650662] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:25.110 [2024-07-15 18:46:41.723913] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:25.110 [2024-07-15 18:46:41.723955] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:25.110 [2024-07-15 18:46:41.723962] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:25.110 [2024-07-15 18:46:41.723968] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:25.110 [2024-07-15 18:46:41.723972] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:25.110 [2024-07-15 18:46:41.724069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.110 [2024-07-15 18:46:41.724164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:25.110 [2024-07-15 18:46:41.724242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:25.110 [2024-07-15 18:46:41.724248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:26.045 [2024-07-15 18:46:42.437084] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:26.045 18:46:42 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:26.045 Malloc0 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:26.045 [2024-07-15 18:46:42.480727] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:26.045 [ 00:21:26.045 { 00:21:26.045 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:26.045 "subtype": "Discovery", 00:21:26.045 "listen_addresses": [], 00:21:26.045 "allow_any_host": true, 00:21:26.045 "hosts": [] 00:21:26.045 }, 00:21:26.045 { 00:21:26.045 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:26.045 "subtype": "NVMe", 00:21:26.045 "listen_addresses": [ 00:21:26.045 { 00:21:26.045 "trtype": "TCP", 00:21:26.045 "adrfam": "IPv4", 00:21:26.045 "traddr": "10.0.0.2", 00:21:26.045 "trsvcid": "4420" 00:21:26.045 } 00:21:26.045 ], 00:21:26.045 "allow_any_host": true, 00:21:26.045 "hosts": [], 00:21:26.045 "serial_number": "SPDK00000000000001", 00:21:26.045 "model_number": "SPDK bdev Controller", 00:21:26.045 "max_namespaces": 2, 00:21:26.045 "min_cntlid": 1, 00:21:26.045 "max_cntlid": 65519, 00:21:26.045 "namespaces": [ 00:21:26.045 { 00:21:26.045 "nsid": 1, 00:21:26.045 "bdev_name": "Malloc0", 00:21:26.045 "name": "Malloc0", 00:21:26.045 "nguid": "87705501DEB74DB69C2E02A43F523749", 00:21:26.045 "uuid": "87705501-deb7-4db6-9c2e-02a43f523749" 00:21:26.045 } 00:21:26.045 ] 00:21:26.045 } 00:21:26.045 ] 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1162830 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:26.045 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:26.045 Malloc1 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.045 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:26.304 Asynchronous Event Request test 00:21:26.304 Attaching to 10.0.0.2 00:21:26.304 Attached to 10.0.0.2 00:21:26.304 Registering asynchronous event callbacks... 00:21:26.304 Starting namespace attribute notice tests for all controllers... 00:21:26.304 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:26.304 aer_cb - Changed Namespace 00:21:26.304 Cleaning up... 
00:21:26.304 [ 00:21:26.304 { 00:21:26.304 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:26.304 "subtype": "Discovery", 00:21:26.304 "listen_addresses": [], 00:21:26.304 "allow_any_host": true, 00:21:26.304 "hosts": [] 00:21:26.304 }, 00:21:26.304 { 00:21:26.304 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:26.304 "subtype": "NVMe", 00:21:26.304 "listen_addresses": [ 00:21:26.304 { 00:21:26.304 "trtype": "TCP", 00:21:26.304 "adrfam": "IPv4", 00:21:26.304 "traddr": "10.0.0.2", 00:21:26.304 "trsvcid": "4420" 00:21:26.304 } 00:21:26.304 ], 00:21:26.304 "allow_any_host": true, 00:21:26.304 "hosts": [], 00:21:26.304 "serial_number": "SPDK00000000000001", 00:21:26.304 "model_number": "SPDK bdev Controller", 00:21:26.304 "max_namespaces": 2, 00:21:26.304 "min_cntlid": 1, 00:21:26.304 "max_cntlid": 65519, 00:21:26.304 "namespaces": [ 00:21:26.304 { 00:21:26.304 "nsid": 1, 00:21:26.304 "bdev_name": "Malloc0", 00:21:26.304 "name": "Malloc0", 00:21:26.304 "nguid": "87705501DEB74DB69C2E02A43F523749", 00:21:26.304 "uuid": "87705501-deb7-4db6-9c2e-02a43f523749" 00:21:26.304 }, 00:21:26.304 { 00:21:26.304 "nsid": 2, 00:21:26.304 "bdev_name": "Malloc1", 00:21:26.304 "name": "Malloc1", 00:21:26.304 "nguid": "A84C9E9EE520418F920ECC61E262C67B", 00:21:26.304 "uuid": "a84c9e9e-e520-418f-920e-cc61e262c67b" 00:21:26.304 } 00:21:26.304 ] 00:21:26.304 } 00:21:26.304 ] 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1162830 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete 
Malloc1 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:26.304 rmmod nvme_tcp 00:21:26.304 rmmod nvme_fabrics 00:21:26.304 rmmod nvme_keyring 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1162772 ']' 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1162772 00:21:26.304 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1162772 ']' 00:21:26.305 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- 
# kill -0 1162772 00:21:26.305 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:21:26.305 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:26.305 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1162772 00:21:26.305 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:26.305 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:26.305 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1162772' 00:21:26.305 killing process with pid 1162772 00:21:26.305 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1162772 00:21:26.305 18:46:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1162772 00:21:26.563 18:46:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:26.563 18:46:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:26.563 18:46:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:26.563 18:46:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:26.563 18:46:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:26.563 18:46:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.563 18:46:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:26.563 18:46:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.463 18:46:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:28.463 00:21:28.463 real 0m9.363s 00:21:28.463 user 0m7.142s 00:21:28.463 sys 0m4.596s 00:21:28.463 18:46:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:28.463 18:46:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 
00:21:28.463 ************************************ 00:21:28.463 END TEST nvmf_aer 00:21:28.463 ************************************ 00:21:28.722 18:46:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:28.722 18:46:45 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:28.722 18:46:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:28.722 18:46:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:28.722 18:46:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:28.722 ************************************ 00:21:28.722 START TEST nvmf_async_init 00:21:28.722 ************************************ 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:28.722 * Looking for test storage... 00:21:28.722 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:28.722 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:28.723 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:28.723 18:46:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:28.723 18:46:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:28.723 18:46:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:28.723 18:46:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:28.723 18:46:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:28.723 18:46:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:28.723 18:46:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- 
# nguid=a005cb88a0524295a693c9b0aa5c7883 00:21:28.723 18:46:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:28.723 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:28.723 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:28.723 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:28.723 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:28.723 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:28.723 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.723 18:46:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:28.723 18:46:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.723 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:28.723 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:28.723 18:46:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:21:28.723 18:46:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:33.992 
18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:33.992 18:46:50 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:33.992 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:33.992 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.992 18:46:50 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:33.992 Found net devices under 0000:86:00.0: cvl_0_0 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:33.992 18:46:50 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.992 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:33.993 Found net devices under 0000:86:00.1: cvl_0_1 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:33.993 
18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:33.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:33.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:21:33.993 00:21:33.993 --- 10.0.0.2 ping statistics --- 00:21:33.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.993 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:33.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:33.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:21:33.993 00:21:33.993 --- 10.0.0.1 ping statistics --- 00:21:33.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.993 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1166344 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1166344 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@829 -- # '[' -z 1166344 ']' 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:33.993 18:46:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:34.251 [2024-07-15 18:46:50.744031] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:21:34.251 [2024-07-15 18:46:50.744076] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:34.251 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.251 [2024-07-15 18:46:50.801588] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.251 [2024-07-15 18:46:50.880265] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:34.251 [2024-07-15 18:46:50.880302] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:34.251 [2024-07-15 18:46:50.880308] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:34.251 [2024-07-15 18:46:50.880314] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:34.251 [2024-07-15 18:46:50.880320] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:34.251 [2024-07-15 18:46:50.880344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:35.185 [2024-07-15 18:46:51.590294] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:35.185 null0 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:35.185 
18:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a005cb88a0524295a693c9b0aa5c7883 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:35.185 [2024-07-15 18:46:51.630468] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:35.185 nvme0n1 00:21:35.185 18:46:51 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:35.185 [ 00:21:35.185 { 00:21:35.185 "name": "nvme0n1", 00:21:35.185 "aliases": [ 00:21:35.185 "a005cb88-a052-4295-a693-c9b0aa5c7883" 00:21:35.185 ], 00:21:35.185 "product_name": "NVMe disk", 00:21:35.185 "block_size": 512, 00:21:35.185 "num_blocks": 2097152, 00:21:35.185 "uuid": "a005cb88-a052-4295-a693-c9b0aa5c7883", 00:21:35.185 "assigned_rate_limits": { 00:21:35.185 "rw_ios_per_sec": 0, 00:21:35.185 "rw_mbytes_per_sec": 0, 00:21:35.185 "r_mbytes_per_sec": 0, 00:21:35.185 "w_mbytes_per_sec": 0 00:21:35.185 }, 00:21:35.185 "claimed": false, 00:21:35.185 "zoned": false, 00:21:35.185 "supported_io_types": { 00:21:35.185 "read": true, 00:21:35.185 "write": true, 00:21:35.185 "unmap": false, 00:21:35.185 "flush": true, 00:21:35.185 "reset": true, 00:21:35.185 "nvme_admin": true, 00:21:35.185 "nvme_io": true, 00:21:35.185 "nvme_io_md": false, 00:21:35.185 "write_zeroes": true, 00:21:35.185 "zcopy": false, 00:21:35.185 "get_zone_info": false, 00:21:35.185 "zone_management": false, 00:21:35.185 "zone_append": false, 00:21:35.185 "compare": true, 00:21:35.185 "compare_and_write": true, 00:21:35.185 "abort": true, 00:21:35.185 "seek_hole": false, 00:21:35.185 "seek_data": false, 00:21:35.185 "copy": true, 00:21:35.185 "nvme_iov_md": false 00:21:35.185 }, 00:21:35.185 "memory_domains": [ 00:21:35.185 { 00:21:35.185 "dma_device_id": "system", 00:21:35.185 "dma_device_type": 1 00:21:35.185 } 00:21:35.185 ], 00:21:35.185 "driver_specific": { 00:21:35.185 "nvme": [ 00:21:35.185 { 00:21:35.185 "trid": { 00:21:35.185 "trtype": "TCP", 00:21:35.185 "adrfam": "IPv4", 00:21:35.185 "traddr": "10.0.0.2", 
00:21:35.185 "trsvcid": "4420", 00:21:35.185 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:35.185 }, 00:21:35.185 "ctrlr_data": { 00:21:35.185 "cntlid": 1, 00:21:35.185 "vendor_id": "0x8086", 00:21:35.185 "model_number": "SPDK bdev Controller", 00:21:35.185 "serial_number": "00000000000000000000", 00:21:35.185 "firmware_revision": "24.09", 00:21:35.185 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:35.185 "oacs": { 00:21:35.185 "security": 0, 00:21:35.185 "format": 0, 00:21:35.185 "firmware": 0, 00:21:35.185 "ns_manage": 0 00:21:35.185 }, 00:21:35.185 "multi_ctrlr": true, 00:21:35.185 "ana_reporting": false 00:21:35.185 }, 00:21:35.185 "vs": { 00:21:35.185 "nvme_version": "1.3" 00:21:35.185 }, 00:21:35.185 "ns_data": { 00:21:35.185 "id": 1, 00:21:35.185 "can_share": true 00:21:35.185 } 00:21:35.185 } 00:21:35.185 ], 00:21:35.185 "mp_policy": "active_passive" 00:21:35.185 } 00:21:35.185 } 00:21:35.185 ] 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.185 18:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:35.185 [2024-07-15 18:46:51.887012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:35.185 [2024-07-15 18:46:51.887066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85a250 (9): Bad file descriptor 00:21:35.444 [2024-07-15 18:46:52.019316] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:35.444 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.444 18:46:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:35.444 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.444 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:35.444 [ 00:21:35.444 { 00:21:35.444 "name": "nvme0n1", 00:21:35.444 "aliases": [ 00:21:35.444 "a005cb88-a052-4295-a693-c9b0aa5c7883" 00:21:35.444 ], 00:21:35.444 "product_name": "NVMe disk", 00:21:35.444 "block_size": 512, 00:21:35.444 "num_blocks": 2097152, 00:21:35.444 "uuid": "a005cb88-a052-4295-a693-c9b0aa5c7883", 00:21:35.444 "assigned_rate_limits": { 00:21:35.444 "rw_ios_per_sec": 0, 00:21:35.444 "rw_mbytes_per_sec": 0, 00:21:35.444 "r_mbytes_per_sec": 0, 00:21:35.444 "w_mbytes_per_sec": 0 00:21:35.444 }, 00:21:35.444 "claimed": false, 00:21:35.444 "zoned": false, 00:21:35.444 "supported_io_types": { 00:21:35.444 "read": true, 00:21:35.444 "write": true, 00:21:35.444 "unmap": false, 00:21:35.444 "flush": true, 00:21:35.444 "reset": true, 00:21:35.444 "nvme_admin": true, 00:21:35.444 "nvme_io": true, 00:21:35.444 "nvme_io_md": false, 00:21:35.444 "write_zeroes": true, 00:21:35.444 "zcopy": false, 00:21:35.444 "get_zone_info": false, 00:21:35.444 "zone_management": false, 00:21:35.444 "zone_append": false, 00:21:35.444 "compare": true, 00:21:35.444 "compare_and_write": true, 00:21:35.444 "abort": true, 00:21:35.444 "seek_hole": false, 00:21:35.444 "seek_data": false, 00:21:35.444 "copy": true, 00:21:35.444 "nvme_iov_md": false 00:21:35.444 }, 00:21:35.444 "memory_domains": [ 00:21:35.444 { 00:21:35.444 "dma_device_id": "system", 00:21:35.444 "dma_device_type": 1 00:21:35.444 } 00:21:35.444 ], 00:21:35.444 "driver_specific": { 00:21:35.444 "nvme": [ 00:21:35.444 { 00:21:35.444 "trid": { 00:21:35.444 "trtype": "TCP", 00:21:35.444 "adrfam": "IPv4", 00:21:35.444 
"traddr": "10.0.0.2", 00:21:35.444 "trsvcid": "4420", 00:21:35.444 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:35.444 }, 00:21:35.444 "ctrlr_data": { 00:21:35.444 "cntlid": 2, 00:21:35.445 "vendor_id": "0x8086", 00:21:35.445 "model_number": "SPDK bdev Controller", 00:21:35.445 "serial_number": "00000000000000000000", 00:21:35.445 "firmware_revision": "24.09", 00:21:35.445 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:35.445 "oacs": { 00:21:35.445 "security": 0, 00:21:35.445 "format": 0, 00:21:35.445 "firmware": 0, 00:21:35.445 "ns_manage": 0 00:21:35.445 }, 00:21:35.445 "multi_ctrlr": true, 00:21:35.445 "ana_reporting": false 00:21:35.445 }, 00:21:35.445 "vs": { 00:21:35.445 "nvme_version": "1.3" 00:21:35.445 }, 00:21:35.445 "ns_data": { 00:21:35.445 "id": 1, 00:21:35.445 "can_share": true 00:21:35.445 } 00:21:35.445 } 00:21:35.445 ], 00:21:35.445 "mp_policy": "active_passive" 00:21:35.445 } 00:21:35.445 } 00:21:35.445 ] 00:21:35.445 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.445 18:46:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:35.445 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.445 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:35.445 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.445 18:46:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:35.445 18:46:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.bycAPjBFSP 00:21:35.445 18:46:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:35.445 18:46:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.bycAPjBFSP 00:21:35.445 18:46:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:35.445 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.445 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:35.445 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.445 18:46:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:35.445 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.445 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:35.445 [2024-07-15 18:46:52.075583] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:35.445 [2024-07-15 18:46:52.075686] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:35.445 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.445 18:46:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bycAPjBFSP 00:21:35.445 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.445 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:35.445 [2024-07-15 18:46:52.083604] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:35.445 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.445 18:46:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bycAPjBFSP 00:21:35.445 18:46:52 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.445 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:35.445 [2024-07-15 18:46:52.091637] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:35.445 [2024-07-15 18:46:52.091670] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:35.703 nvme0n1 00:21:35.703 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.703 18:46:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:35.703 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.703 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:35.703 [ 00:21:35.703 { 00:21:35.703 "name": "nvme0n1", 00:21:35.703 "aliases": [ 00:21:35.703 "a005cb88-a052-4295-a693-c9b0aa5c7883" 00:21:35.703 ], 00:21:35.703 "product_name": "NVMe disk", 00:21:35.703 "block_size": 512, 00:21:35.703 "num_blocks": 2097152, 00:21:35.703 "uuid": "a005cb88-a052-4295-a693-c9b0aa5c7883", 00:21:35.703 "assigned_rate_limits": { 00:21:35.703 "rw_ios_per_sec": 0, 00:21:35.703 "rw_mbytes_per_sec": 0, 00:21:35.703 "r_mbytes_per_sec": 0, 00:21:35.703 "w_mbytes_per_sec": 0 00:21:35.703 }, 00:21:35.703 "claimed": false, 00:21:35.703 "zoned": false, 00:21:35.703 "supported_io_types": { 00:21:35.703 "read": true, 00:21:35.703 "write": true, 00:21:35.703 "unmap": false, 00:21:35.703 "flush": true, 00:21:35.703 "reset": true, 00:21:35.703 "nvme_admin": true, 00:21:35.703 "nvme_io": true, 00:21:35.703 "nvme_io_md": false, 00:21:35.703 "write_zeroes": true, 00:21:35.703 "zcopy": false, 00:21:35.703 "get_zone_info": false, 00:21:35.703 "zone_management": false, 00:21:35.703 "zone_append": false, 00:21:35.703 "compare": true, 00:21:35.703 
"compare_and_write": true, 00:21:35.703 "abort": true, 00:21:35.703 "seek_hole": false, 00:21:35.703 "seek_data": false, 00:21:35.703 "copy": true, 00:21:35.703 "nvme_iov_md": false 00:21:35.703 }, 00:21:35.703 "memory_domains": [ 00:21:35.703 { 00:21:35.703 "dma_device_id": "system", 00:21:35.703 "dma_device_type": 1 00:21:35.703 } 00:21:35.703 ], 00:21:35.703 "driver_specific": { 00:21:35.703 "nvme": [ 00:21:35.703 { 00:21:35.703 "trid": { 00:21:35.703 "trtype": "TCP", 00:21:35.703 "adrfam": "IPv4", 00:21:35.703 "traddr": "10.0.0.2", 00:21:35.703 "trsvcid": "4421", 00:21:35.703 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:35.703 }, 00:21:35.703 "ctrlr_data": { 00:21:35.703 "cntlid": 3, 00:21:35.703 "vendor_id": "0x8086", 00:21:35.703 "model_number": "SPDK bdev Controller", 00:21:35.703 "serial_number": "00000000000000000000", 00:21:35.703 "firmware_revision": "24.09", 00:21:35.703 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:35.703 "oacs": { 00:21:35.703 "security": 0, 00:21:35.703 "format": 0, 00:21:35.703 "firmware": 0, 00:21:35.703 "ns_manage": 0 00:21:35.703 }, 00:21:35.703 "multi_ctrlr": true, 00:21:35.703 "ana_reporting": false 00:21:35.703 }, 00:21:35.703 "vs": { 00:21:35.703 "nvme_version": "1.3" 00:21:35.703 }, 00:21:35.703 "ns_data": { 00:21:35.703 "id": 1, 00:21:35.703 "can_share": true 00:21:35.703 } 00:21:35.704 } 00:21:35.704 ], 00:21:35.704 "mp_policy": "active_passive" 00:21:35.704 } 00:21:35.704 } 00:21:35.704 ] 00:21:35.704 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.704 18:46:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:35.704 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.704 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:35.704 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.704 18:46:52 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.bycAPjBFSP 00:21:35.704 18:46:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:35.704 18:46:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:21:35.704 18:46:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:35.704 18:46:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:21:35.704 18:46:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:35.704 18:46:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:21:35.704 18:46:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:35.704 18:46:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:35.704 rmmod nvme_tcp 00:21:35.704 rmmod nvme_fabrics 00:21:35.704 rmmod nvme_keyring 00:21:35.704 18:46:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:35.704 18:46:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:21:35.704 18:46:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:21:35.704 18:46:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1166344 ']' 00:21:35.704 18:46:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1166344 00:21:35.704 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1166344 ']' 00:21:35.704 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1166344 00:21:35.704 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:21:35.704 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:35.704 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1166344 00:21:35.704 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:35.704 18:46:52 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:35.704 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1166344' 00:21:35.704 killing process with pid 1166344 00:21:35.704 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1166344 00:21:35.704 [2024-07-15 18:46:52.295074] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:35.704 [2024-07-15 18:46:52.295099] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:35.704 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1166344 00:21:35.961 18:46:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:35.961 18:46:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:35.961 18:46:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:35.961 18:46:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:35.961 18:46:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:35.961 18:46:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.961 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:35.962 18:46:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.861 18:46:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:37.861 00:21:37.861 real 0m9.292s 00:21:37.861 user 0m3.419s 00:21:37.861 sys 0m4.417s 00:21:37.861 18:46:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:37.861 18:46:54 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@10 -- # set +x 00:21:37.861 ************************************ 00:21:37.861 END TEST nvmf_async_init 00:21:37.861 ************************************ 00:21:37.861 18:46:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:37.861 18:46:54 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:37.861 18:46:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:37.861 18:46:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:37.861 18:46:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:38.119 ************************************ 00:21:38.119 START TEST dma 00:21:38.119 ************************************ 00:21:38.119 18:46:54 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:38.119 * Looking for test storage... 00:21:38.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:38.119 18:46:54 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:38.119 18:46:54 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:21:38.119 18:46:54 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:38.119 18:46:54 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:38.119 18:46:54 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:38.119 18:46:54 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:38.119 18:46:54 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:38.119 18:46:54 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:38.119 18:46:54 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:38.119 18:46:54 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:38.119 18:46:54 nvmf_tcp.dma 
-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:38.119 18:46:54 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:38.119 18:46:54 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:38.119 18:46:54 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:38.119 18:46:54 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:38.119 18:46:54 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:38.119 18:46:54 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:38.119 18:46:54 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:38.119 18:46:54 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:38.119 18:46:54 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:38.119 18:46:54 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:38.119 18:46:54 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:38.119 18:46:54 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.119 18:46:54 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.119 18:46:54 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.119 18:46:54 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:21:38.119 18:46:54 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.119 18:46:54 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:21:38.119 18:46:54 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:38.119 18:46:54 nvmf_tcp.dma -- 
nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:38.119 18:46:54 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:38.119 18:46:54 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:38.119 18:46:54 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:38.119 18:46:54 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:38.119 18:46:54 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:38.119 18:46:54 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:38.119 18:46:54 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:38.119 18:46:54 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:21:38.119 00:21:38.119 real 0m0.113s 00:21:38.119 user 0m0.053s 00:21:38.119 sys 0m0.065s 00:21:38.119 18:46:54 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:38.119 18:46:54 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:21:38.119 ************************************ 00:21:38.119 END TEST dma 00:21:38.119 ************************************ 00:21:38.119 18:46:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:38.119 18:46:54 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:38.119 18:46:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:38.119 18:46:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:38.119 18:46:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:38.119 ************************************ 00:21:38.119 START TEST nvmf_identify 00:21:38.119 ************************************ 00:21:38.119 18:46:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:38.378 * Looking for test storage... 
00:21:38.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:38.378 18:46:54 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:38.378 18:46:54 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:21:38.378 18:46:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:43.641 18:46:59 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:43.641 
18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:43.641 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:43.641 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:43.641 Found net devices under 0000:86:00.0: cvl_0_0 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.641 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:43.642 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.642 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:43.642 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:21:43.642 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.642 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:43.642 Found net devices under 0000:86:00.1: cvl_0_1 00:21:43.642 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.642 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:43.642 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:21:43.642 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:43.642 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:43.642 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:43.642 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:43.642 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:43.642 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:43.642 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:43.642 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:43.642 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:43.642 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:43.642 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:43.642 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:43.642 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:43.642 18:46:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:43.642 18:46:59 
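The device enumeration above buckets PCI vendor/device IDs into the e810, x722, and mlx families before picking the net devices to test with. A minimal illustrative re-implementation of that bucketing, using the hypothetical helper name `classify` and only the IDs that appear in this log:

```shell
#!/usr/binm/env bash
# Illustrative sketch of the PCI-ID bucketing done by
# gather_supported_nvmf_pci_devs above; vendor/device IDs copied from the log.
classify() {
    local vendor=$1 device=$2
    case "$vendor:$device" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2) echo x722 ;;
        0x15b3:0xa2dc|0x15b3:0x1021|0x15b3:0xa2d6|0x15b3:0x101d|0x15b3:0x1017|0x15b3:0x1019|0x15b3:0x1015|0x15b3:0x1013) echo mlx ;;
        *) echo unknown ;;
    esac
}

# The two ports found in this run, 0000:86:00.0/.1 (0x8086 - 0x159b):
classify 0x8086 0x159b   # -> e810
```

This matches why both ports in this run land in the e810 array and are handled by the `ice` driver branch of the script.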
nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:43.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:43.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:21:43.642 00:21:43.642 --- 10.0.0.2 ping statistics --- 00:21:43.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.642 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:43.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:43.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:21:43.642 00:21:43.642 --- 10.0.0.1 ping statistics --- 00:21:43.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.642 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1170150 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1170150 00:21:43.642 18:47:00 
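The `nvmftestinit` phase above isolates the target-side port in a network namespace so initiator and target traffic traverse the real NICs. A recap of that sequence as one script, assuming the interface names (cvl_0_0/cvl_0_1) and 10.0.0.0/24 addressing from this run; it needs root and the actual devices, so treat it as a sketch rather than a runnable test:

```shell
#!/usr/bin/env bash
# Recap of the namespace setup performed by nvmftestinit above (requires root
# and the two net devices cvl_0_0/cvl_0_1 present in this run).
set -euo pipefail

NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0                # start from clean addressing
ip -4 addr flush cvl_0_1
ip netns add "$NS"                      # namespace for the target side
ip link set cvl_0_0 netns "$NS"         # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1     # initiator IP on the host side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP
ping -c 1 10.0.0.2                      # host -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1  # namespaced target -> host
```

The two pings at the end correspond to the round-trip checks logged above before `nvmftestinit` returns 0.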
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1170150 ']' 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:43.642 18:47:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:43.642 [2024-07-15 18:47:00.305109] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:21:43.642 [2024-07-15 18:47:00.305151] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:43.642 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.901 [2024-07-15 18:47:00.362319] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:43.901 [2024-07-15 18:47:00.443654] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:43.901 [2024-07-15 18:47:00.443690] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:43.901 [2024-07-15 18:47:00.443697] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:43.901 [2024-07-15 18:47:00.443703] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:43.901 [2024-07-15 18:47:00.443708] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:43.901 [2024-07-15 18:47:00.443748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.901 [2024-07-15 18:47:00.443844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.901 [2024-07-15 18:47:00.443919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:43.901 [2024-07-15 18:47:00.443920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.467 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:44.467 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:21:44.467 18:47:01 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:44.467 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.467 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:44.467 [2024-07-15 18:47:01.122935] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.467 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.467 18:47:01 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:44.467 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:44.467 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:44.467 18:47:01 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:44.467 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.467 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:44.729 Malloc0 00:21:44.729 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.729 18:47:01 
nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:44.729 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.729 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:44.729 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.729 18:47:01 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:44.729 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.729 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:44.729 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.729 18:47:01 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:44.729 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.729 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:44.729 [2024-07-15 18:47:01.211072] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:44.729 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.729 18:47:01 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:44.729 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.729 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:44.729 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.729 18:47:01 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd 
nvmf_get_subsystems 00:21:44.729 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.729 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:44.729 [ 00:21:44.729 { 00:21:44.729 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:44.729 "subtype": "Discovery", 00:21:44.729 "listen_addresses": [ 00:21:44.729 { 00:21:44.729 "trtype": "TCP", 00:21:44.729 "adrfam": "IPv4", 00:21:44.729 "traddr": "10.0.0.2", 00:21:44.729 "trsvcid": "4420" 00:21:44.729 } 00:21:44.729 ], 00:21:44.729 "allow_any_host": true, 00:21:44.729 "hosts": [] 00:21:44.729 }, 00:21:44.729 { 00:21:44.729 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.729 "subtype": "NVMe", 00:21:44.729 "listen_addresses": [ 00:21:44.729 { 00:21:44.729 "trtype": "TCP", 00:21:44.729 "adrfam": "IPv4", 00:21:44.729 "traddr": "10.0.0.2", 00:21:44.729 "trsvcid": "4420" 00:21:44.729 } 00:21:44.729 ], 00:21:44.729 "allow_any_host": true, 00:21:44.729 "hosts": [], 00:21:44.729 "serial_number": "SPDK00000000000001", 00:21:44.729 "model_number": "SPDK bdev Controller", 00:21:44.729 "max_namespaces": 32, 00:21:44.729 "min_cntlid": 1, 00:21:44.729 "max_cntlid": 65519, 00:21:44.729 "namespaces": [ 00:21:44.729 { 00:21:44.729 "nsid": 1, 00:21:44.729 "bdev_name": "Malloc0", 00:21:44.729 "name": "Malloc0", 00:21:44.729 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:44.729 "eui64": "ABCDEF0123456789", 00:21:44.729 "uuid": "fabf6dac-e871-4704-aee6-9212b897b1b3" 00:21:44.729 } 00:21:44.729 ] 00:21:44.729 } 00:21:44.729 ] 00:21:44.729 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.729 18:47:01 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:44.729 [2024-07-15 18:47:01.263283] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 
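The subsystem listing above is the result of the `rpc_cmd` calls issued by `host/identify.sh`. Assuming `rpc_cmd` wraps `scripts/rpc.py` against the `nvmf_tgt` started earlier (an assumption about the test harness, not shown in this excerpt), the equivalent standalone sequence is roughly:

```shell
#!/usr/bin/env bash
# Hypothetical replay of the rpc_cmd sequence above, assuming rpc_cmd is a
# thin wrapper around scripts/rpc.py talking to the running nvmf_tgt.
set -euo pipefail
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

"$rpc" nvmf_create_transport -t tcp -o -u 8192           # TCP transport init
"$rpc" bdev_malloc_create 64 512 -b Malloc0              # 64 MiB, 512 B blocks
"$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns "$nqn" Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_get_subsystems                               # prints the JSON above
```

The sizes (64, 512) come from MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE set at the top of `identify.sh`, and the listener address/port match the netns target IP configured during `nvmftestinit`.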
24.03.0 initialization... 00:21:44.729 [2024-07-15 18:47:01.263317] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1170278 ] 00:21:44.729 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.729 [2024-07-15 18:47:01.291804] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:44.729 [2024-07-15 18:47:01.291852] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:44.729 [2024-07-15 18:47:01.291860] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:44.729 [2024-07-15 18:47:01.291871] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:44.729 [2024-07-15 18:47:01.291877] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:44.729 [2024-07-15 18:47:01.292243] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:44.729 [2024-07-15 18:47:01.292273] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2211ec0 0 00:21:44.729 [2024-07-15 18:47:01.306239] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:44.729 [2024-07-15 18:47:01.306250] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:44.729 [2024-07-15 18:47:01.306254] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:44.729 [2024-07-15 18:47:01.306257] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:44.729 [2024-07-15 18:47:01.306290] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.729 [2024-07-15 18:47:01.306295] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.729 [2024-07-15 18:47:01.306299] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2211ec0) 00:21:44.729 [2024-07-15 18:47:01.306310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:44.729 [2024-07-15 18:47:01.306325] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2294e40, cid 0, qid 0 00:21:44.729 [2024-07-15 18:47:01.314235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.729 [2024-07-15 18:47:01.314244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.729 [2024-07-15 18:47:01.314248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.729 [2024-07-15 18:47:01.314252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2294e40) on tqpair=0x2211ec0 00:21:44.729 [2024-07-15 18:47:01.314261] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:44.729 [2024-07-15 18:47:01.314267] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:44.729 [2024-07-15 18:47:01.314271] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:21:44.729 [2024-07-15 18:47:01.314285] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.729 [2024-07-15 18:47:01.314289] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.729 [2024-07-15 18:47:01.314292] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2211ec0) 00:21:44.729 [2024-07-15 18:47:01.314299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.729 [2024-07-15 18:47:01.314312] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2294e40, cid 0, qid 0 00:21:44.729 [2024-07-15 18:47:01.314497] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.729 [2024-07-15 18:47:01.314503] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.729 [2024-07-15 18:47:01.314506] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.729 [2024-07-15 18:47:01.314509] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2294e40) on tqpair=0x2211ec0 00:21:44.729 [2024-07-15 18:47:01.314514] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:44.729 [2024-07-15 18:47:01.314522] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:44.729 [2024-07-15 18:47:01.314528] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.729 [2024-07-15 18:47:01.314532] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.729 [2024-07-15 18:47:01.314535] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2211ec0) 00:21:44.729 [2024-07-15 18:47:01.314543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.729 [2024-07-15 18:47:01.314554] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2294e40, cid 0, qid 0 00:21:44.729 [2024-07-15 18:47:01.314630] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.729 [2024-07-15 18:47:01.314636] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.729 [2024-07-15 18:47:01.314639] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.729 [2024-07-15 18:47:01.314642] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2294e40) on 
tqpair=0x2211ec0 00:21:44.729 [2024-07-15 18:47:01.314647] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:44.729 [2024-07-15 18:47:01.314654] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:44.729 [2024-07-15 18:47:01.314660] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.729 [2024-07-15 18:47:01.314663] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.729 [2024-07-15 18:47:01.314666] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2211ec0) 00:21:44.729 [2024-07-15 18:47:01.314672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.729 [2024-07-15 18:47:01.314681] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2294e40, cid 0, qid 0 00:21:44.729 [2024-07-15 18:47:01.314763] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.729 [2024-07-15 18:47:01.314769] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.729 [2024-07-15 18:47:01.314772] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.729 [2024-07-15 18:47:01.314775] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2294e40) on tqpair=0x2211ec0 00:21:44.729 [2024-07-15 18:47:01.314780] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:44.729 [2024-07-15 18:47:01.314788] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.729 [2024-07-15 18:47:01.314791] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.729 [2024-07-15 18:47:01.314795] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x2211ec0) 00:21:44.730 [2024-07-15 18:47:01.314800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.730 [2024-07-15 18:47:01.314810] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2294e40, cid 0, qid 0 00:21:44.730 [2024-07-15 18:47:01.314880] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.730 [2024-07-15 18:47:01.314886] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.730 [2024-07-15 18:47:01.314889] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.730 [2024-07-15 18:47:01.314892] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2294e40) on tqpair=0x2211ec0 00:21:44.730 [2024-07-15 18:47:01.314896] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:44.730 [2024-07-15 18:47:01.314900] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:44.730 [2024-07-15 18:47:01.314907] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:44.730 [2024-07-15 18:47:01.315011] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:44.730 [2024-07-15 18:47:01.315016] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:44.730 [2024-07-15 18:47:01.315025] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.730 [2024-07-15 18:47:01.315029] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.730 [2024-07-15 18:47:01.315032] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2211ec0) 00:21:44.730 [2024-07-15 18:47:01.315038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.730 [2024-07-15 18:47:01.315048] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2294e40, cid 0, qid 0 00:21:44.730 [2024-07-15 18:47:01.315121] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.730 [2024-07-15 18:47:01.315126] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.730 [2024-07-15 18:47:01.315129] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.730 [2024-07-15 18:47:01.315133] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2294e40) on tqpair=0x2211ec0 00:21:44.730 [2024-07-15 18:47:01.315137] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:44.730 [2024-07-15 18:47:01.315144] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.730 [2024-07-15 18:47:01.315148] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.730 [2024-07-15 18:47:01.315151] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2211ec0) 00:21:44.730 [2024-07-15 18:47:01.315157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.730 [2024-07-15 18:47:01.315166] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2294e40, cid 0, qid 0 00:21:44.730 [2024-07-15 18:47:01.315286] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.730 [2024-07-15 18:47:01.315291] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.730 [2024-07-15 18:47:01.315294] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.730 [2024-07-15 18:47:01.315297] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2294e40) on tqpair=0x2211ec0 00:21:44.730 [2024-07-15 18:47:01.315301] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:44.730 [2024-07-15 18:47:01.315305] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:44.730 [2024-07-15 18:47:01.315313] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:44.730 [2024-07-15 18:47:01.315320] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:44.730 [2024-07-15 18:47:01.315328] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.730 [2024-07-15 18:47:01.315331] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2211ec0) 00:21:44.730 [2024-07-15 18:47:01.315337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.730 [2024-07-15 18:47:01.315348] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2294e40, cid 0, qid 0 00:21:44.730 [2024-07-15 18:47:01.315467] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:44.730 [2024-07-15 18:47:01.315473] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:44.730 [2024-07-15 18:47:01.315476] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:44.730 [2024-07-15 18:47:01.315480] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x2211ec0): datao=0, datal=4096, cccid=0 00:21:44.730 [2024-07-15 18:47:01.315484] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2294e40) on tqpair(0x2211ec0): expected_datao=0, payload_size=4096 00:21:44.730 [2024-07-15 18:47:01.315488] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.730 [2024-07-15 18:47:01.315496] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:44.730 [2024-07-15 18:47:01.315500] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:44.730 [2024-07-15 18:47:01.315548] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.730 [2024-07-15 18:47:01.315553] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.730 [2024-07-15 18:47:01.315556] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.730 [2024-07-15 18:47:01.315560] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2294e40) on tqpair=0x2211ec0 00:21:44.730 [2024-07-15 18:47:01.315566] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:44.730 [2024-07-15 18:47:01.315573] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:44.730 [2024-07-15 18:47:01.315577] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:44.730 [2024-07-15 18:47:01.315581] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:44.730 [2024-07-15 18:47:01.315585] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:44.730 [2024-07-15 18:47:01.315590] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:44.730 [2024-07-15 
18:47:01.315598] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:44.730 [2024-07-15 18:47:01.315604] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.730 [2024-07-15 18:47:01.315608] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.730 [2024-07-15 18:47:01.315610] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2211ec0) 00:21:44.730 [2024-07-15 18:47:01.315617] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:44.730 [2024-07-15 18:47:01.315627] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2294e40, cid 0, qid 0 00:21:44.730 [2024-07-15 18:47:01.315707] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.730 [2024-07-15 18:47:01.315712] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.730 [2024-07-15 18:47:01.315715] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.730 [2024-07-15 18:47:01.315719] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2294e40) on tqpair=0x2211ec0 00:21:44.730 [2024-07-15 18:47:01.315725] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.730 [2024-07-15 18:47:01.315728] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.730 [2024-07-15 18:47:01.315732] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2211ec0) 00:21:44.730 [2024-07-15 18:47:01.315737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.730 [2024-07-15 18:47:01.315742] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.730 [2024-07-15 18:47:01.315746] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.730 [2024-07-15 18:47:01.315749] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2211ec0) 00:21:44.730 [2024-07-15 18:47:01.315754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.730 [2024-07-15 18:47:01.315758] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.730 [2024-07-15 18:47:01.315762] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.730 [2024-07-15 18:47:01.315765] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2211ec0) 00:21:44.730 [2024-07-15 18:47:01.315770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.730 [2024-07-15 18:47:01.315777] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.730 [2024-07-15 18:47:01.315780] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.730 [2024-07-15 18:47:01.315783] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2211ec0) 00:21:44.730 [2024-07-15 18:47:01.315788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.730 [2024-07-15 18:47:01.315792] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:44.730 [2024-07-15 18:47:01.315802] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:44.730 [2024-07-15 18:47:01.315808] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.730 [2024-07-15 18:47:01.315811] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2211ec0) 00:21:44.730 [2024-07-15 18:47:01.315817] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.730 [2024-07-15 18:47:01.315828] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2294e40, cid 0, qid 0 00:21:44.730 [2024-07-15 18:47:01.315833] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2294fc0, cid 1, qid 0 00:21:44.730 [2024-07-15 18:47:01.315837] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2295140, cid 2, qid 0 00:21:44.730 [2024-07-15 18:47:01.315841] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22952c0, cid 3, qid 0 00:21:44.730 [2024-07-15 18:47:01.315845] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2295440, cid 4, qid 0 00:21:44.730 [2024-07-15 18:47:01.315958] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.730 [2024-07-15 18:47:01.315964] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.730 [2024-07-15 18:47:01.315967] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.730 [2024-07-15 18:47:01.315970] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2295440) on tqpair=0x2211ec0 00:21:44.730 [2024-07-15 18:47:01.315975] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:44.730 [2024-07-15 18:47:01.315979] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:44.730 [2024-07-15 18:47:01.315988] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.730 [2024-07-15 18:47:01.315992] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2211ec0) 
00:21:44.730 [2024-07-15 18:47:01.315997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.730 [2024-07-15 18:47:01.316007] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2295440, cid 4, qid 0 00:21:44.730 [2024-07-15 18:47:01.316102] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:44.730 [2024-07-15 18:47:01.316108] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:44.731 [2024-07-15 18:47:01.316111] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:44.731 [2024-07-15 18:47:01.316114] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2211ec0): datao=0, datal=4096, cccid=4 00:21:44.731 [2024-07-15 18:47:01.316118] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2295440) on tqpair(0x2211ec0): expected_datao=0, payload_size=4096 00:21:44.731 [2024-07-15 18:47:01.316121] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.731 [2024-07-15 18:47:01.316127] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:44.731 [2024-07-15 18:47:01.316130] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:44.731 [2024-07-15 18:47:01.316178] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.731 [2024-07-15 18:47:01.316186] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.731 [2024-07-15 18:47:01.316189] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.731 [2024-07-15 18:47:01.316192] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2295440) on tqpair=0x2211ec0 00:21:44.731 [2024-07-15 18:47:01.316204] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:44.731 [2024-07-15 18:47:01.316231] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.731 [2024-07-15 18:47:01.316235] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2211ec0) 00:21:44.731 [2024-07-15 18:47:01.316241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.731 [2024-07-15 18:47:01.316247] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.731 [2024-07-15 18:47:01.316250] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.731 [2024-07-15 18:47:01.316253] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2211ec0) 00:21:44.731 [2024-07-15 18:47:01.316258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.731 [2024-07-15 18:47:01.316271] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2295440, cid 4, qid 0 00:21:44.731 [2024-07-15 18:47:01.316276] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22955c0, cid 5, qid 0 00:21:44.731 [2024-07-15 18:47:01.316386] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:44.731 [2024-07-15 18:47:01.316392] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:44.731 [2024-07-15 18:47:01.316395] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:44.731 [2024-07-15 18:47:01.316398] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2211ec0): datao=0, datal=1024, cccid=4 00:21:44.731 [2024-07-15 18:47:01.316402] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2295440) on tqpair(0x2211ec0): expected_datao=0, payload_size=1024 00:21:44.731 [2024-07-15 18:47:01.316405] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.731 [2024-07-15 18:47:01.316411] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:44.731 [2024-07-15 18:47:01.316414] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:44.731 [2024-07-15 18:47:01.316419] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.731 [2024-07-15 18:47:01.316424] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.731 [2024-07-15 18:47:01.316427] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.731 [2024-07-15 18:47:01.316430] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22955c0) on tqpair=0x2211ec0 00:21:44.731 [2024-07-15 18:47:01.358311] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.731 [2024-07-15 18:47:01.358327] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.731 [2024-07-15 18:47:01.358331] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.731 [2024-07-15 18:47:01.358334] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2295440) on tqpair=0x2211ec0 00:21:44.731 [2024-07-15 18:47:01.358354] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.731 [2024-07-15 18:47:01.358358] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2211ec0) 00:21:44.731 [2024-07-15 18:47:01.358366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.731 [2024-07-15 18:47:01.358382] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2295440, cid 4, qid 0 00:21:44.731 [2024-07-15 18:47:01.358484] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:44.731 [2024-07-15 18:47:01.358490] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:44.731 [2024-07-15 18:47:01.358493] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 
00:21:44.731 [2024-07-15 18:47:01.358496] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2211ec0): datao=0, datal=3072, cccid=4 00:21:44.731 [2024-07-15 18:47:01.358503] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2295440) on tqpair(0x2211ec0): expected_datao=0, payload_size=3072 00:21:44.731 [2024-07-15 18:47:01.358507] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.731 [2024-07-15 18:47:01.358513] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:44.731 [2024-07-15 18:47:01.358516] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:44.731 [2024-07-15 18:47:01.358568] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.731 [2024-07-15 18:47:01.358574] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.731 [2024-07-15 18:47:01.358577] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.731 [2024-07-15 18:47:01.358580] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2295440) on tqpair=0x2211ec0 00:21:44.731 [2024-07-15 18:47:01.358588] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.731 [2024-07-15 18:47:01.358591] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2211ec0) 00:21:44.731 [2024-07-15 18:47:01.358597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.731 [2024-07-15 18:47:01.358611] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2295440, cid 4, qid 0 00:21:44.731 [2024-07-15 18:47:01.358725] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:44.731 [2024-07-15 18:47:01.358730] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:44.731 [2024-07-15 18:47:01.358733] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: enter 00:21:44.731 [2024-07-15 18:47:01.358736] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2211ec0): datao=0, datal=8, cccid=4 00:21:44.731 [2024-07-15 18:47:01.358740] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2295440) on tqpair(0x2211ec0): expected_datao=0, payload_size=8 00:21:44.731 [2024-07-15 18:47:01.358744] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.731 [2024-07-15 18:47:01.358749] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:44.731 [2024-07-15 18:47:01.358753] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:44.731 [2024-07-15 18:47:01.399434] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.731 [2024-07-15 18:47:01.399444] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.731 [2024-07-15 18:47:01.399447] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.731 [2024-07-15 18:47:01.399451] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2295440) on tqpair=0x2211ec0 00:21:44.731 ===================================================== 00:21:44.731 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:44.731 ===================================================== 00:21:44.731 Controller Capabilities/Features 00:21:44.731 ================================ 00:21:44.731 Vendor ID: 0000 00:21:44.731 Subsystem Vendor ID: 0000 00:21:44.731 Serial Number: .................... 00:21:44.731 Model Number: ........................................ 
00:21:44.731 Firmware Version: 24.09 00:21:44.731 Recommended Arb Burst: 0 00:21:44.731 IEEE OUI Identifier: 00 00 00 00:21:44.731 Multi-path I/O 00:21:44.731 May have multiple subsystem ports: No 00:21:44.731 May have multiple controllers: No 00:21:44.731 Associated with SR-IOV VF: No 00:21:44.731 Max Data Transfer Size: 131072 00:21:44.731 Max Number of Namespaces: 0 00:21:44.731 Max Number of I/O Queues: 1024 00:21:44.731 NVMe Specification Version (VS): 1.3 00:21:44.731 NVMe Specification Version (Identify): 1.3 00:21:44.731 Maximum Queue Entries: 128 00:21:44.731 Contiguous Queues Required: Yes 00:21:44.731 Arbitration Mechanisms Supported 00:21:44.731 Weighted Round Robin: Not Supported 00:21:44.731 Vendor Specific: Not Supported 00:21:44.731 Reset Timeout: 15000 ms 00:21:44.731 Doorbell Stride: 4 bytes 00:21:44.731 NVM Subsystem Reset: Not Supported 00:21:44.731 Command Sets Supported 00:21:44.731 NVM Command Set: Supported 00:21:44.731 Boot Partition: Not Supported 00:21:44.731 Memory Page Size Minimum: 4096 bytes 00:21:44.731 Memory Page Size Maximum: 4096 bytes 00:21:44.731 Persistent Memory Region: Not Supported 00:21:44.731 Optional Asynchronous Events Supported 00:21:44.731 Namespace Attribute Notices: Not Supported 00:21:44.731 Firmware Activation Notices: Not Supported 00:21:44.731 ANA Change Notices: Not Supported 00:21:44.731 PLE Aggregate Log Change Notices: Not Supported 00:21:44.731 LBA Status Info Alert Notices: Not Supported 00:21:44.731 EGE Aggregate Log Change Notices: Not Supported 00:21:44.731 Normal NVM Subsystem Shutdown event: Not Supported 00:21:44.731 Zone Descriptor Change Notices: Not Supported 00:21:44.731 Discovery Log Change Notices: Supported 00:21:44.731 Controller Attributes 00:21:44.731 128-bit Host Identifier: Not Supported 00:21:44.731 Non-Operational Permissive Mode: Not Supported 00:21:44.731 NVM Sets: Not Supported 00:21:44.731 Read Recovery Levels: Not Supported 00:21:44.731 Endurance Groups: Not Supported 00:21:44.731 
Predictable Latency Mode: Not Supported 00:21:44.731 Traffic Based Keep ALive: Not Supported 00:21:44.731 Namespace Granularity: Not Supported 00:21:44.731 SQ Associations: Not Supported 00:21:44.731 UUID List: Not Supported 00:21:44.731 Multi-Domain Subsystem: Not Supported 00:21:44.731 Fixed Capacity Management: Not Supported 00:21:44.731 Variable Capacity Management: Not Supported 00:21:44.731 Delete Endurance Group: Not Supported 00:21:44.731 Delete NVM Set: Not Supported 00:21:44.731 Extended LBA Formats Supported: Not Supported 00:21:44.731 Flexible Data Placement Supported: Not Supported 00:21:44.731 00:21:44.731 Controller Memory Buffer Support 00:21:44.731 ================================ 00:21:44.731 Supported: No 00:21:44.731 00:21:44.731 Persistent Memory Region Support 00:21:44.731 ================================ 00:21:44.731 Supported: No 00:21:44.731 00:21:44.732 Admin Command Set Attributes 00:21:44.732 ============================ 00:21:44.732 Security Send/Receive: Not Supported 00:21:44.732 Format NVM: Not Supported 00:21:44.732 Firmware Activate/Download: Not Supported 00:21:44.732 Namespace Management: Not Supported 00:21:44.732 Device Self-Test: Not Supported 00:21:44.732 Directives: Not Supported 00:21:44.732 NVMe-MI: Not Supported 00:21:44.732 Virtualization Management: Not Supported 00:21:44.732 Doorbell Buffer Config: Not Supported 00:21:44.732 Get LBA Status Capability: Not Supported 00:21:44.732 Command & Feature Lockdown Capability: Not Supported 00:21:44.732 Abort Command Limit: 1 00:21:44.732 Async Event Request Limit: 4 00:21:44.732 Number of Firmware Slots: N/A 00:21:44.732 Firmware Slot 1 Read-Only: N/A 00:21:44.732 Firmware Activation Without Reset: N/A 00:21:44.732 Multiple Update Detection Support: N/A 00:21:44.732 Firmware Update Granularity: No Information Provided 00:21:44.732 Per-Namespace SMART Log: No 00:21:44.732 Asymmetric Namespace Access Log Page: Not Supported 00:21:44.732 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:21:44.732 Command Effects Log Page: Not Supported 00:21:44.732 Get Log Page Extended Data: Supported 00:21:44.732 Telemetry Log Pages: Not Supported 00:21:44.732 Persistent Event Log Pages: Not Supported 00:21:44.732 Supported Log Pages Log Page: May Support 00:21:44.732 Commands Supported & Effects Log Page: Not Supported 00:21:44.732 Feature Identifiers & Effects Log Page:May Support 00:21:44.732 NVMe-MI Commands & Effects Log Page: May Support 00:21:44.732 Data Area 4 for Telemetry Log: Not Supported 00:21:44.732 Error Log Page Entries Supported: 128 00:21:44.732 Keep Alive: Not Supported 00:21:44.732 00:21:44.732 NVM Command Set Attributes 00:21:44.732 ========================== 00:21:44.732 Submission Queue Entry Size 00:21:44.732 Max: 1 00:21:44.732 Min: 1 00:21:44.732 Completion Queue Entry Size 00:21:44.732 Max: 1 00:21:44.732 Min: 1 00:21:44.732 Number of Namespaces: 0 00:21:44.732 Compare Command: Not Supported 00:21:44.732 Write Uncorrectable Command: Not Supported 00:21:44.732 Dataset Management Command: Not Supported 00:21:44.732 Write Zeroes Command: Not Supported 00:21:44.732 Set Features Save Field: Not Supported 00:21:44.732 Reservations: Not Supported 00:21:44.732 Timestamp: Not Supported 00:21:44.732 Copy: Not Supported 00:21:44.732 Volatile Write Cache: Not Present 00:21:44.732 Atomic Write Unit (Normal): 1 00:21:44.732 Atomic Write Unit (PFail): 1 00:21:44.732 Atomic Compare & Write Unit: 1 00:21:44.732 Fused Compare & Write: Supported 00:21:44.732 Scatter-Gather List 00:21:44.732 SGL Command Set: Supported 00:21:44.732 SGL Keyed: Supported 00:21:44.732 SGL Bit Bucket Descriptor: Not Supported 00:21:44.732 SGL Metadata Pointer: Not Supported 00:21:44.732 Oversized SGL: Not Supported 00:21:44.732 SGL Metadata Address: Not Supported 00:21:44.732 SGL Offset: Supported 00:21:44.732 Transport SGL Data Block: Not Supported 00:21:44.732 Replay Protected Memory Block: Not Supported 00:21:44.732 00:21:44.732 
Firmware Slot Information 00:21:44.732 ========================= 00:21:44.732 Active slot: 0 00:21:44.732 00:21:44.732 00:21:44.732 Error Log 00:21:44.732 ========= 00:21:44.732 00:21:44.732 Active Namespaces 00:21:44.732 ================= 00:21:44.732 Discovery Log Page 00:21:44.732 ================== 00:21:44.732 Generation Counter: 2 00:21:44.732 Number of Records: 2 00:21:44.732 Record Format: 0 00:21:44.732 00:21:44.732 Discovery Log Entry 0 00:21:44.732 ---------------------- 00:21:44.732 Transport Type: 3 (TCP) 00:21:44.732 Address Family: 1 (IPv4) 00:21:44.732 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:44.732 Entry Flags: 00:21:44.732 Duplicate Returned Information: 1 00:21:44.732 Explicit Persistent Connection Support for Discovery: 1 00:21:44.732 Transport Requirements: 00:21:44.732 Secure Channel: Not Required 00:21:44.732 Port ID: 0 (0x0000) 00:21:44.732 Controller ID: 65535 (0xffff) 00:21:44.732 Admin Max SQ Size: 128 00:21:44.732 Transport Service Identifier: 4420 00:21:44.732 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:44.732 Transport Address: 10.0.0.2 00:21:44.732 Discovery Log Entry 1 00:21:44.732 ---------------------- 00:21:44.732 Transport Type: 3 (TCP) 00:21:44.732 Address Family: 1 (IPv4) 00:21:44.732 Subsystem Type: 2 (NVM Subsystem) 00:21:44.732 Entry Flags: 00:21:44.732 Duplicate Returned Information: 0 00:21:44.732 Explicit Persistent Connection Support for Discovery: 0 00:21:44.732 Transport Requirements: 00:21:44.732 Secure Channel: Not Required 00:21:44.732 Port ID: 0 (0x0000) 00:21:44.732 Controller ID: 65535 (0xffff) 00:21:44.732 Admin Max SQ Size: 128 00:21:44.732 Transport Service Identifier: 4420 00:21:44.732 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:44.732 Transport Address: 10.0.0.2 [2024-07-15 18:47:01.399532] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:44.732 [2024-07-15 18:47:01.399542] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2294e40) on tqpair=0x2211ec0 00:21:44.732 [2024-07-15 18:47:01.399547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.732 [2024-07-15 18:47:01.399552] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2294fc0) on tqpair=0x2211ec0 00:21:44.732 [2024-07-15 18:47:01.399556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.732 [2024-07-15 18:47:01.399560] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2295140) on tqpair=0x2211ec0 00:21:44.732 [2024-07-15 18:47:01.399563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.732 [2024-07-15 18:47:01.399568] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22952c0) on tqpair=0x2211ec0 00:21:44.732 [2024-07-15 18:47:01.399572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.732 [2024-07-15 18:47:01.399581] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.732 [2024-07-15 18:47:01.399586] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.732 [2024-07-15 18:47:01.399589] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2211ec0) 00:21:44.732 [2024-07-15 18:47:01.399596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.732 [2024-07-15 18:47:01.399609] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22952c0, cid 3, qid 0 00:21:44.732 [2024-07-15 18:47:01.399681] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.732 [2024-07-15 18:47:01.399687] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.732 [2024-07-15 18:47:01.399690] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.732 [2024-07-15 18:47:01.399693] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22952c0) on tqpair=0x2211ec0 00:21:44.732 [2024-07-15 18:47:01.399699] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.732 [2024-07-15 18:47:01.399702] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.732 [2024-07-15 18:47:01.399706] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2211ec0) 00:21:44.732 [2024-07-15 18:47:01.399711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.732 [2024-07-15 18:47:01.399725] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22952c0, cid 3, qid 0 00:21:44.732 [2024-07-15 18:47:01.399811] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.732 [2024-07-15 18:47:01.399817] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.732 [2024-07-15 18:47:01.399820] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.732 [2024-07-15 18:47:01.399823] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22952c0) on tqpair=0x2211ec0 00:21:44.732 [2024-07-15 18:47:01.399827] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:44.732 [2024-07-15 18:47:01.399831] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:44.732 [2024-07-15 18:47:01.399839] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.732 [2024-07-15 18:47:01.399843] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.732 [2024-07-15 
18:47:01.399846] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2211ec0) 00:21:44.732 [2024-07-15 18:47:01.399852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.732 [2024-07-15 18:47:01.399861] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22952c0, cid 3, qid 0 00:21:44.732 [2024-07-15 18:47:01.399970] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.732 [2024-07-15 18:47:01.399975] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.732 [2024-07-15 18:47:01.399978] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.732 [2024-07-15 18:47:01.399982] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22952c0) on tqpair=0x2211ec0 00:21:44.732 [2024-07-15 18:47:01.399991] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.732 [2024-07-15 18:47:01.399995] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.732 [2024-07-15 18:47:01.399997] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2211ec0) 00:21:44.732 [2024-07-15 18:47:01.400003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.732 [2024-07-15 18:47:01.400012] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22952c0, cid 3, qid 0 00:21:44.732 [2024-07-15 18:47:01.400084] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.732 [2024-07-15 18:47:01.400090] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.732 [2024-07-15 18:47:01.400093] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.733 [2024-07-15 18:47:01.400098] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22952c0) on tqpair=0x2211ec0 
00:21:44.733 [2024-07-15 18:47:01.400106] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.733 [2024-07-15 18:47:01.400110] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.733 [2024-07-15 18:47:01.400113] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2211ec0) 00:21:44.733 [2024-07-15 18:47:01.400118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.733 [2024-07-15 18:47:01.400127] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22952c0, cid 3, qid 0 00:21:44.733 [2024-07-15 18:47:01.400243] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.733 [2024-07-15 18:47:01.400249] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.733 [2024-07-15 18:47:01.400252] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.733 [2024-07-15 18:47:01.400255] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22952c0) on tqpair=0x2211ec0 00:21:44.733 [2024-07-15 18:47:01.400264] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.733 [2024-07-15 18:47:01.400267] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.733 [2024-07-15 18:47:01.400270] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2211ec0) 00:21:44.733 [2024-07-15 18:47:01.400276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.733 [2024-07-15 18:47:01.400286] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22952c0, cid 3, qid 0 00:21:44.733 [2024-07-15 18:47:01.400359] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.733 [2024-07-15 18:47:01.400365] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.733 
[2024-07-15 18:47:01.400368] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.733 [2024-07-15 18:47:01.400371] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22952c0) on tqpair=0x2211ec0 00:21:44.733 [2024-07-15 18:47:01.400379] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.733 [2024-07-15 18:47:01.400383] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.733 [2024-07-15 18:47:01.400386] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2211ec0) 00:21:44.733 [2024-07-15 18:47:01.400392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.733 [2024-07-15 18:47:01.400401] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22952c0, cid 3, qid 0 00:21:44.733 [2024-07-15 18:47:01.400473] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.733 [2024-07-15 18:47:01.400479] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.733 [2024-07-15 18:47:01.400481] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.733 [2024-07-15 18:47:01.400485] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22952c0) on tqpair=0x2211ec0 00:21:44.733 [2024-07-15 18:47:01.400493] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.733 [2024-07-15 18:47:01.400496] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.733 [2024-07-15 18:47:01.400499] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2211ec0) 00:21:44.733 [2024-07-15 18:47:01.400505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.733 [2024-07-15 18:47:01.400514] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22952c0, cid 3, qid 
0 00:21:44.733 [2024-07-15 18:47:01.404232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.733 [2024-07-15 18:47:01.404241] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.733 [2024-07-15 18:47:01.404244] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.733 [2024-07-15 18:47:01.404248] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22952c0) on tqpair=0x2211ec0 00:21:44.733 [2024-07-15 18:47:01.404261] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.733 [2024-07-15 18:47:01.404265] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.733 [2024-07-15 18:47:01.404268] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2211ec0) 00:21:44.733 [2024-07-15 18:47:01.404274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.733 [2024-07-15 18:47:01.404285] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22952c0, cid 3, qid 0 00:21:44.733 [2024-07-15 18:47:01.404425] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.733 [2024-07-15 18:47:01.404431] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.733 [2024-07-15 18:47:01.404434] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.733 [2024-07-15 18:47:01.404437] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22952c0) on tqpair=0x2211ec0 00:21:44.733 [2024-07-15 18:47:01.404443] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:21:44.733 00:21:44.733 18:47:01 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:44.995 [2024-07-15 18:47:01.440338] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:21:44.995 [2024-07-15 18:47:01.440378] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1170376 ] 00:21:44.995 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.995 [2024-07-15 18:47:01.469477] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:44.995 [2024-07-15 18:47:01.469520] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:44.995 [2024-07-15 18:47:01.469525] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:44.995 [2024-07-15 18:47:01.469534] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:44.995 [2024-07-15 18:47:01.469540] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:44.995 [2024-07-15 18:47:01.469878] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:44.995 [2024-07-15 18:47:01.469900] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1947ec0 0 00:21:44.995 [2024-07-15 18:47:01.476232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:44.995 [2024-07-15 18:47:01.476241] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:44.995 [2024-07-15 18:47:01.476245] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:44.995 [2024-07-15 18:47:01.476248] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:44.995 [2024-07-15 18:47:01.476273] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.995 [2024-07-15 18:47:01.476278] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.995 [2024-07-15 18:47:01.476282] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1947ec0) 00:21:44.995 [2024-07-15 18:47:01.476291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:44.995 [2024-07-15 18:47:01.476306] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cae40, cid 0, qid 0 00:21:44.995 [2024-07-15 18:47:01.484236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.995 [2024-07-15 18:47:01.484247] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.995 [2024-07-15 18:47:01.484250] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.995 [2024-07-15 18:47:01.484254] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cae40) on tqpair=0x1947ec0 00:21:44.995 [2024-07-15 18:47:01.484262] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:44.995 [2024-07-15 18:47:01.484268] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:44.995 [2024-07-15 18:47:01.484272] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:44.995 [2024-07-15 18:47:01.484282] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.995 [2024-07-15 18:47:01.484286] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.995 [2024-07-15 18:47:01.484289] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1947ec0) 00:21:44.995 [2024-07-15 18:47:01.484297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:44.995 [2024-07-15 18:47:01.484310] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cae40, cid 0, qid 0 00:21:44.995 [2024-07-15 18:47:01.484457] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.995 [2024-07-15 18:47:01.484464] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.995 [2024-07-15 18:47:01.484467] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.995 [2024-07-15 18:47:01.484470] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cae40) on tqpair=0x1947ec0 00:21:44.995 [2024-07-15 18:47:01.484474] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:44.995 [2024-07-15 18:47:01.484480] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:44.995 [2024-07-15 18:47:01.484487] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.995 [2024-07-15 18:47:01.484491] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.995 [2024-07-15 18:47:01.484494] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1947ec0) 00:21:44.995 [2024-07-15 18:47:01.484500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.995 [2024-07-15 18:47:01.484510] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cae40, cid 0, qid 0 00:21:44.995 [2024-07-15 18:47:01.484581] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.995 [2024-07-15 18:47:01.484587] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.995 [2024-07-15 18:47:01.484590] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.995 [2024-07-15 18:47:01.484594] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cae40) on tqpair=0x1947ec0 00:21:44.995 [2024-07-15 18:47:01.484598] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:44.995 [2024-07-15 18:47:01.484605] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:44.995 [2024-07-15 18:47:01.484611] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.995 [2024-07-15 18:47:01.484614] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.995 [2024-07-15 18:47:01.484617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1947ec0) 00:21:44.995 [2024-07-15 18:47:01.484623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.995 [2024-07-15 18:47:01.484632] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cae40, cid 0, qid 0 00:21:44.995 [2024-07-15 18:47:01.484704] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.995 [2024-07-15 18:47:01.484710] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.995 [2024-07-15 18:47:01.484715] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.995 [2024-07-15 18:47:01.484719] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cae40) on tqpair=0x1947ec0 00:21:44.995 [2024-07-15 18:47:01.484723] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:44.995 [2024-07-15 18:47:01.484731] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.995 [2024-07-15 18:47:01.484735] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.995 [2024-07-15 18:47:01.484738] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1947ec0) 00:21:44.995 [2024-07-15 18:47:01.484743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.995 [2024-07-15 18:47:01.484753] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cae40, cid 0, qid 0 00:21:44.995 [2024-07-15 18:47:01.484826] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.995 [2024-07-15 18:47:01.484831] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.995 [2024-07-15 18:47:01.484834] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.995 [2024-07-15 18:47:01.484837] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cae40) on tqpair=0x1947ec0 00:21:44.995 [2024-07-15 18:47:01.484841] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:44.995 [2024-07-15 18:47:01.484845] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:44.995 [2024-07-15 18:47:01.484851] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:44.995 [2024-07-15 18:47:01.484956] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:44.995 [2024-07-15 18:47:01.484960] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:44.995 [2024-07-15 18:47:01.484966] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.995 [2024-07-15 18:47:01.484970] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.996 [2024-07-15 18:47:01.484973] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1947ec0) 00:21:44.996 [2024-07-15 18:47:01.484978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.996 [2024-07-15 18:47:01.484988] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cae40, cid 0, qid 0 00:21:44.996 [2024-07-15 18:47:01.485063] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.996 [2024-07-15 18:47:01.485069] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.996 [2024-07-15 18:47:01.485072] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.996 [2024-07-15 18:47:01.485075] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cae40) on tqpair=0x1947ec0 00:21:44.996 [2024-07-15 18:47:01.485079] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:44.996 [2024-07-15 18:47:01.485087] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.996 [2024-07-15 18:47:01.485090] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.996 [2024-07-15 18:47:01.485093] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1947ec0) 00:21:44.996 [2024-07-15 18:47:01.485099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.996 [2024-07-15 18:47:01.485108] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cae40, cid 0, qid 0 00:21:44.996 [2024-07-15 18:47:01.485182] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.996 [2024-07-15 18:47:01.485187] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.996 [2024-07-15 18:47:01.485192] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.996 [2024-07-15 18:47:01.485195] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cae40) on tqpair=0x1947ec0 00:21:44.996 [2024-07-15 18:47:01.485199] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:44.996 [2024-07-15 18:47:01.485203] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:44.996 [2024-07-15 18:47:01.485209] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:44.996 [2024-07-15 18:47:01.485220] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:44.996 [2024-07-15 18:47:01.485233] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.996 [2024-07-15 18:47:01.485236] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1947ec0) 00:21:44.996 [2024-07-15 18:47:01.485242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.996 [2024-07-15 18:47:01.485252] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cae40, cid 0, qid 0 00:21:44.996 [2024-07-15 18:47:01.485355] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:44.996 [2024-07-15 18:47:01.485360] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:44.996 [2024-07-15 18:47:01.485363] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:44.996 [2024-07-15 18:47:01.485367] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1947ec0): datao=0, datal=4096, cccid=0 
00:21:44.996 [2024-07-15 18:47:01.485371] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19cae40) on tqpair(0x1947ec0): expected_datao=0, payload_size=4096 00:21:44.996 [2024-07-15 18:47:01.485374] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.996 [2024-07-15 18:47:01.485402] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:44.996 [2024-07-15 18:47:01.485407] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:44.996 [2024-07-15 18:47:01.485457] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.996 [2024-07-15 18:47:01.485462] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.996 [2024-07-15 18:47:01.485465] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.996 [2024-07-15 18:47:01.485469] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cae40) on tqpair=0x1947ec0 00:21:44.996 [2024-07-15 18:47:01.485475] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:44.996 [2024-07-15 18:47:01.485481] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:44.996 [2024-07-15 18:47:01.485486] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:44.996 [2024-07-15 18:47:01.485489] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:44.996 [2024-07-15 18:47:01.485493] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:44.996 [2024-07-15 18:47:01.485497] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:44.996 [2024-07-15 18:47:01.485505] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
wait for configure aer (timeout 30000 ms) 00:21:44.996 [2024-07-15 18:47:01.485511] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.996 [2024-07-15 18:47:01.485514] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.996 [2024-07-15 18:47:01.485517] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1947ec0) 00:21:44.996 [2024-07-15 18:47:01.485525] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:44.996 [2024-07-15 18:47:01.485535] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cae40, cid 0, qid 0 00:21:44.996 [2024-07-15 18:47:01.485627] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.996 [2024-07-15 18:47:01.485633] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.996 [2024-07-15 18:47:01.485636] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.996 [2024-07-15 18:47:01.485639] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cae40) on tqpair=0x1947ec0 00:21:44.996 [2024-07-15 18:47:01.485645] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.996 [2024-07-15 18:47:01.485648] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.996 [2024-07-15 18:47:01.485651] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1947ec0) 00:21:44.996 [2024-07-15 18:47:01.485656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.996 [2024-07-15 18:47:01.485662] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.996 [2024-07-15 18:47:01.485665] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.996 [2024-07-15 18:47:01.485668] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1947ec0) 00:21:44.996 [2024-07-15 18:47:01.485673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.996 [2024-07-15 18:47:01.485678] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.996 [2024-07-15 18:47:01.485681] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.996 [2024-07-15 18:47:01.485684] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1947ec0) 00:21:44.996 [2024-07-15 18:47:01.485689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.996 [2024-07-15 18:47:01.485694] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.996 [2024-07-15 18:47:01.485697] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.996 [2024-07-15 18:47:01.485700] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1947ec0) 00:21:44.996 [2024-07-15 18:47:01.485705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.996 [2024-07-15 18:47:01.485709] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:44.996 [2024-07-15 18:47:01.485718] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:44.996 [2024-07-15 18:47:01.485724] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.996 [2024-07-15 18:47:01.485727] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1947ec0) 00:21:44.996 [2024-07-15 18:47:01.485733] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.996 [2024-07-15 18:47:01.485743] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cae40, cid 0, qid 0 00:21:44.996 [2024-07-15 18:47:01.485748] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cafc0, cid 1, qid 0 00:21:44.996 [2024-07-15 18:47:01.485752] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cb140, cid 2, qid 0 00:21:44.996 [2024-07-15 18:47:01.485756] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cb2c0, cid 3, qid 0 00:21:44.996 [2024-07-15 18:47:01.485760] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cb440, cid 4, qid 0 00:21:44.996 [2024-07-15 18:47:01.485867] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.996 [2024-07-15 18:47:01.485873] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.996 [2024-07-15 18:47:01.485877] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.996 [2024-07-15 18:47:01.485880] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cb440) on tqpair=0x1947ec0 00:21:44.996 [2024-07-15 18:47:01.485884] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:44.996 [2024-07-15 18:47:01.485889] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:44.996 [2024-07-15 18:47:01.485896] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:44.996 [2024-07-15 18:47:01.485901] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 
00:21:44.996 [2024-07-15 18:47:01.485906] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.996 [2024-07-15 18:47:01.485910] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.996 [2024-07-15 18:47:01.485913] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1947ec0) 00:21:44.996 [2024-07-15 18:47:01.485918] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:44.996 [2024-07-15 18:47:01.485928] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cb440, cid 4, qid 0 00:21:44.996 [2024-07-15 18:47:01.486005] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.996 [2024-07-15 18:47:01.486010] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.996 [2024-07-15 18:47:01.486014] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.996 [2024-07-15 18:47:01.486017] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cb440) on tqpair=0x1947ec0 00:21:44.996 [2024-07-15 18:47:01.486066] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:44.996 [2024-07-15 18:47:01.486075] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:44.996 [2024-07-15 18:47:01.486081] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.996 [2024-07-15 18:47:01.486085] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1947ec0) 00:21:44.996 [2024-07-15 18:47:01.486090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.996 [2024-07-15 18:47:01.486100] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cb440, cid 4, qid 0 00:21:44.996 [2024-07-15 18:47:01.486185] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:44.997 [2024-07-15 18:47:01.486190] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:44.997 [2024-07-15 18:47:01.486194] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.486197] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1947ec0): datao=0, datal=4096, cccid=4 00:21:44.997 [2024-07-15 18:47:01.486200] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19cb440) on tqpair(0x1947ec0): expected_datao=0, payload_size=4096 00:21:44.997 [2024-07-15 18:47:01.486204] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.486232] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.486237] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.531232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.997 [2024-07-15 18:47:01.531243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.997 [2024-07-15 18:47:01.531246] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.531249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cb440) on tqpair=0x1947ec0 00:21:44.997 [2024-07-15 18:47:01.531260] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:44.997 [2024-07-15 18:47:01.531271] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:44.997 [2024-07-15 18:47:01.531280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 
00:21:44.997 [2024-07-15 18:47:01.531286] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.531290] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1947ec0) 00:21:44.997 [2024-07-15 18:47:01.531297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.997 [2024-07-15 18:47:01.531308] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cb440, cid 4, qid 0 00:21:44.997 [2024-07-15 18:47:01.531481] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:44.997 [2024-07-15 18:47:01.531487] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:44.997 [2024-07-15 18:47:01.531490] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.531493] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1947ec0): datao=0, datal=4096, cccid=4 00:21:44.997 [2024-07-15 18:47:01.531497] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19cb440) on tqpair(0x1947ec0): expected_datao=0, payload_size=4096 00:21:44.997 [2024-07-15 18:47:01.531501] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.531550] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.531554] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.531629] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.997 [2024-07-15 18:47:01.531635] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.997 [2024-07-15 18:47:01.531638] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.531641] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cb440) on 
tqpair=0x1947ec0 00:21:44.997 [2024-07-15 18:47:01.531652] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:44.997 [2024-07-15 18:47:01.531660] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:44.997 [2024-07-15 18:47:01.531667] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.531670] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1947ec0) 00:21:44.997 [2024-07-15 18:47:01.531676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.997 [2024-07-15 18:47:01.531687] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cb440, cid 4, qid 0 00:21:44.997 [2024-07-15 18:47:01.531769] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:44.997 [2024-07-15 18:47:01.531775] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:44.997 [2024-07-15 18:47:01.531778] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.531781] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1947ec0): datao=0, datal=4096, cccid=4 00:21:44.997 [2024-07-15 18:47:01.531784] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19cb440) on tqpair(0x1947ec0): expected_datao=0, payload_size=4096 00:21:44.997 [2024-07-15 18:47:01.531788] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.531815] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.531819] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.531881] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.997 [2024-07-15 18:47:01.531887] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.997 [2024-07-15 18:47:01.531894] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.531898] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cb440) on tqpair=0x1947ec0 00:21:44.997 [2024-07-15 18:47:01.531903] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:44.997 [2024-07-15 18:47:01.531910] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:21:44.997 [2024-07-15 18:47:01.531917] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:44.997 [2024-07-15 18:47:01.531923] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:44.997 [2024-07-15 18:47:01.531927] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:44.997 [2024-07-15 18:47:01.531931] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:44.997 [2024-07-15 18:47:01.531935] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:44.997 [2024-07-15 18:47:01.531939] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:44.997 [2024-07-15 18:47:01.531944] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready 
(no timeout) 00:21:44.997 [2024-07-15 18:47:01.531956] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.531960] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1947ec0) 00:21:44.997 [2024-07-15 18:47:01.531966] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.997 [2024-07-15 18:47:01.531972] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.531975] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.531978] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1947ec0) 00:21:44.997 [2024-07-15 18:47:01.531983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.997 [2024-07-15 18:47:01.531995] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cb440, cid 4, qid 0 00:21:44.997 [2024-07-15 18:47:01.532000] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cb5c0, cid 5, qid 0 00:21:44.997 [2024-07-15 18:47:01.532089] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.997 [2024-07-15 18:47:01.532094] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.997 [2024-07-15 18:47:01.532097] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.532101] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cb440) on tqpair=0x1947ec0 00:21:44.997 [2024-07-15 18:47:01.532106] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.997 [2024-07-15 18:47:01.532111] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.997 [2024-07-15 18:47:01.532114] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.532117] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cb5c0) on tqpair=0x1947ec0 00:21:44.997 [2024-07-15 18:47:01.532126] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.532129] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1947ec0) 00:21:44.997 [2024-07-15 18:47:01.532135] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.997 [2024-07-15 18:47:01.532146] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cb5c0, cid 5, qid 0 00:21:44.997 [2024-07-15 18:47:01.532235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.997 [2024-07-15 18:47:01.532241] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.997 [2024-07-15 18:47:01.532244] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.532247] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cb5c0) on tqpair=0x1947ec0 00:21:44.997 [2024-07-15 18:47:01.532255] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.532258] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1947ec0) 00:21:44.997 [2024-07-15 18:47:01.532264] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.997 [2024-07-15 18:47:01.532273] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cb5c0, cid 5, qid 0 00:21:44.997 [2024-07-15 18:47:01.532416] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.997 [2024-07-15 18:47:01.532421] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.997 [2024-07-15 
18:47:01.532424] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.532427] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cb5c0) on tqpair=0x1947ec0 00:21:44.997 [2024-07-15 18:47:01.532436] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.532439] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1947ec0) 00:21:44.997 [2024-07-15 18:47:01.532445] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.997 [2024-07-15 18:47:01.532455] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cb5c0, cid 5, qid 0 00:21:44.997 [2024-07-15 18:47:01.532581] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.997 [2024-07-15 18:47:01.532586] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.997 [2024-07-15 18:47:01.532589] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.532592] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cb5c0) on tqpair=0x1947ec0 00:21:44.997 [2024-07-15 18:47:01.532604] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.532608] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1947ec0) 00:21:44.997 [2024-07-15 18:47:01.532614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.997 [2024-07-15 18:47:01.532620] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.532623] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1947ec0) 00:21:44.997 [2024-07-15 18:47:01.532628] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.997 [2024-07-15 18:47:01.532634] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.997 [2024-07-15 18:47:01.532638] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1947ec0) 00:21:44.998 [2024-07-15 18:47:01.532643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.998 [2024-07-15 18:47:01.532649] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.998 [2024-07-15 18:47:01.532652] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1947ec0) 00:21:44.998 [2024-07-15 18:47:01.532657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.998 [2024-07-15 18:47:01.532668] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cb5c0, cid 5, qid 0 00:21:44.998 [2024-07-15 18:47:01.532674] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cb440, cid 4, qid 0 00:21:44.998 [2024-07-15 18:47:01.532678] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cb740, cid 6, qid 0 00:21:44.998 [2024-07-15 18:47:01.532682] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cb8c0, cid 7, qid 0 00:21:44.998 [2024-07-15 18:47:01.532847] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:44.998 [2024-07-15 18:47:01.532853] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:44.998 [2024-07-15 18:47:01.532856] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:44.998 [2024-07-15 18:47:01.532859] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1947ec0): datao=0, datal=8192, cccid=5 00:21:44.998 [2024-07-15 18:47:01.532863] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19cb5c0) on tqpair(0x1947ec0): expected_datao=0, payload_size=8192 00:21:44.998 [2024-07-15 18:47:01.532867] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.998 [2024-07-15 18:47:01.532873] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:44.998 [2024-07-15 18:47:01.532876] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:44.998 [2024-07-15 18:47:01.532881] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:44.998 [2024-07-15 18:47:01.532886] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:44.998 [2024-07-15 18:47:01.532889] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:44.998 [2024-07-15 18:47:01.532892] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1947ec0): datao=0, datal=512, cccid=4 00:21:44.998 [2024-07-15 18:47:01.532895] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19cb440) on tqpair(0x1947ec0): expected_datao=0, payload_size=512 00:21:44.998 [2024-07-15 18:47:01.532899] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.998 [2024-07-15 18:47:01.532904] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:44.998 [2024-07-15 18:47:01.532907] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:44.998 [2024-07-15 18:47:01.532912] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:44.998 [2024-07-15 18:47:01.532917] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:44.998 [2024-07-15 18:47:01.532920] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:44.998 [2024-07-15 18:47:01.532923] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data 
info on tqpair(0x1947ec0): datao=0, datal=512, cccid=6 00:21:44.998 [2024-07-15 18:47:01.532926] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19cb740) on tqpair(0x1947ec0): expected_datao=0, payload_size=512 00:21:44.998 [2024-07-15 18:47:01.532930] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.998 [2024-07-15 18:47:01.532935] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:44.998 [2024-07-15 18:47:01.532938] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:44.998 [2024-07-15 18:47:01.532943] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:44.998 [2024-07-15 18:47:01.532948] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:44.998 [2024-07-15 18:47:01.532951] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:44.998 [2024-07-15 18:47:01.532954] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1947ec0): datao=0, datal=4096, cccid=7 00:21:44.998 [2024-07-15 18:47:01.532957] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19cb8c0) on tqpair(0x1947ec0): expected_datao=0, payload_size=4096 00:21:44.998 [2024-07-15 18:47:01.532961] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.998 [2024-07-15 18:47:01.532973] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:44.998 [2024-07-15 18:47:01.532976] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:44.998 [2024-07-15 18:47:01.574430] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.998 [2024-07-15 18:47:01.574444] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.998 [2024-07-15 18:47:01.574450] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.998 [2024-07-15 18:47:01.574454] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cb5c0) on tqpair=0x1947ec0 00:21:44.998 [2024-07-15 
18:47:01.574467] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.998 [2024-07-15 18:47:01.574472] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.998 [2024-07-15 18:47:01.574475] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.998 [2024-07-15 18:47:01.574479] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cb440) on tqpair=0x1947ec0 00:21:44.998 [2024-07-15 18:47:01.574487] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.998 [2024-07-15 18:47:01.574492] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.998 [2024-07-15 18:47:01.574495] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.998 [2024-07-15 18:47:01.574498] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cb740) on tqpair=0x1947ec0 00:21:44.998 [2024-07-15 18:47:01.574504] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.998 [2024-07-15 18:47:01.574509] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.998 [2024-07-15 18:47:01.574512] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.998 [2024-07-15 18:47:01.574515] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cb8c0) on tqpair=0x1947ec0 00:21:44.998 ===================================================== 00:21:44.998 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:44.998 ===================================================== 00:21:44.998 Controller Capabilities/Features 00:21:44.998 ================================ 00:21:44.998 Vendor ID: 8086 00:21:44.998 Subsystem Vendor ID: 8086 00:21:44.998 Serial Number: SPDK00000000000001 00:21:44.998 Model Number: SPDK bdev Controller 00:21:44.998 Firmware Version: 24.09 00:21:44.998 Recommended Arb Burst: 6 00:21:44.998 IEEE OUI Identifier: e4 d2 5c 00:21:44.998 Multi-path I/O 00:21:44.998 May have 
multiple subsystem ports: Yes 00:21:44.998 May have multiple controllers: Yes 00:21:44.998 Associated with SR-IOV VF: No 00:21:44.998 Max Data Transfer Size: 131072 00:21:44.998 Max Number of Namespaces: 32 00:21:44.998 Max Number of I/O Queues: 127 00:21:44.998 NVMe Specification Version (VS): 1.3 00:21:44.998 NVMe Specification Version (Identify): 1.3 00:21:44.998 Maximum Queue Entries: 128 00:21:44.998 Contiguous Queues Required: Yes 00:21:44.998 Arbitration Mechanisms Supported 00:21:44.998 Weighted Round Robin: Not Supported 00:21:44.998 Vendor Specific: Not Supported 00:21:44.998 Reset Timeout: 15000 ms 00:21:44.998 Doorbell Stride: 4 bytes 00:21:44.998 NVM Subsystem Reset: Not Supported 00:21:44.998 Command Sets Supported 00:21:44.998 NVM Command Set: Supported 00:21:44.998 Boot Partition: Not Supported 00:21:44.998 Memory Page Size Minimum: 4096 bytes 00:21:44.998 Memory Page Size Maximum: 4096 bytes 00:21:44.998 Persistent Memory Region: Not Supported 00:21:44.998 Optional Asynchronous Events Supported 00:21:44.998 Namespace Attribute Notices: Supported 00:21:44.998 Firmware Activation Notices: Not Supported 00:21:44.998 ANA Change Notices: Not Supported 00:21:44.998 PLE Aggregate Log Change Notices: Not Supported 00:21:44.998 LBA Status Info Alert Notices: Not Supported 00:21:44.998 EGE Aggregate Log Change Notices: Not Supported 00:21:44.998 Normal NVM Subsystem Shutdown event: Not Supported 00:21:44.998 Zone Descriptor Change Notices: Not Supported 00:21:44.998 Discovery Log Change Notices: Not Supported 00:21:44.998 Controller Attributes 00:21:44.998 128-bit Host Identifier: Supported 00:21:44.998 Non-Operational Permissive Mode: Not Supported 00:21:44.998 NVM Sets: Not Supported 00:21:44.998 Read Recovery Levels: Not Supported 00:21:44.998 Endurance Groups: Not Supported 00:21:44.998 Predictable Latency Mode: Not Supported 00:21:44.998 Traffic Based Keep ALive: Not Supported 00:21:44.998 Namespace Granularity: Not Supported 00:21:44.998 SQ 
Associations: Not Supported 00:21:44.998 UUID List: Not Supported 00:21:44.998 Multi-Domain Subsystem: Not Supported 00:21:44.998 Fixed Capacity Management: Not Supported 00:21:44.998 Variable Capacity Management: Not Supported 00:21:44.998 Delete Endurance Group: Not Supported 00:21:44.998 Delete NVM Set: Not Supported 00:21:44.998 Extended LBA Formats Supported: Not Supported 00:21:44.998 Flexible Data Placement Supported: Not Supported 00:21:44.998 00:21:44.998 Controller Memory Buffer Support 00:21:44.998 ================================ 00:21:44.998 Supported: No 00:21:44.998 00:21:44.998 Persistent Memory Region Support 00:21:44.998 ================================ 00:21:44.998 Supported: No 00:21:44.998 00:21:44.998 Admin Command Set Attributes 00:21:44.998 ============================ 00:21:44.998 Security Send/Receive: Not Supported 00:21:44.998 Format NVM: Not Supported 00:21:44.998 Firmware Activate/Download: Not Supported 00:21:44.998 Namespace Management: Not Supported 00:21:44.998 Device Self-Test: Not Supported 00:21:44.998 Directives: Not Supported 00:21:44.998 NVMe-MI: Not Supported 00:21:44.998 Virtualization Management: Not Supported 00:21:44.998 Doorbell Buffer Config: Not Supported 00:21:44.998 Get LBA Status Capability: Not Supported 00:21:44.998 Command & Feature Lockdown Capability: Not Supported 00:21:44.998 Abort Command Limit: 4 00:21:44.998 Async Event Request Limit: 4 00:21:44.998 Number of Firmware Slots: N/A 00:21:44.998 Firmware Slot 1 Read-Only: N/A 00:21:44.998 Firmware Activation Without Reset: N/A 00:21:44.998 Multiple Update Detection Support: N/A 00:21:44.998 Firmware Update Granularity: No Information Provided 00:21:44.998 Per-Namespace SMART Log: No 00:21:44.998 Asymmetric Namespace Access Log Page: Not Supported 00:21:44.998 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:44.998 Command Effects Log Page: Supported 00:21:44.998 Get Log Page Extended Data: Supported 00:21:44.998 Telemetry Log Pages: Not Supported 00:21:44.998 
Persistent Event Log Pages: Not Supported 00:21:44.998 Supported Log Pages Log Page: May Support 00:21:44.998 Commands Supported & Effects Log Page: Not Supported 00:21:44.998 Feature Identifiers & Effects Log Page:May Support 00:21:44.999 NVMe-MI Commands & Effects Log Page: May Support 00:21:44.999 Data Area 4 for Telemetry Log: Not Supported 00:21:44.999 Error Log Page Entries Supported: 128 00:21:44.999 Keep Alive: Supported 00:21:44.999 Keep Alive Granularity: 10000 ms 00:21:44.999 00:21:44.999 NVM Command Set Attributes 00:21:44.999 ========================== 00:21:44.999 Submission Queue Entry Size 00:21:44.999 Max: 64 00:21:44.999 Min: 64 00:21:44.999 Completion Queue Entry Size 00:21:44.999 Max: 16 00:21:44.999 Min: 16 00:21:44.999 Number of Namespaces: 32 00:21:44.999 Compare Command: Supported 00:21:44.999 Write Uncorrectable Command: Not Supported 00:21:44.999 Dataset Management Command: Supported 00:21:44.999 Write Zeroes Command: Supported 00:21:44.999 Set Features Save Field: Not Supported 00:21:44.999 Reservations: Supported 00:21:44.999 Timestamp: Not Supported 00:21:44.999 Copy: Supported 00:21:44.999 Volatile Write Cache: Present 00:21:44.999 Atomic Write Unit (Normal): 1 00:21:44.999 Atomic Write Unit (PFail): 1 00:21:44.999 Atomic Compare & Write Unit: 1 00:21:44.999 Fused Compare & Write: Supported 00:21:44.999 Scatter-Gather List 00:21:44.999 SGL Command Set: Supported 00:21:44.999 SGL Keyed: Supported 00:21:44.999 SGL Bit Bucket Descriptor: Not Supported 00:21:44.999 SGL Metadata Pointer: Not Supported 00:21:44.999 Oversized SGL: Not Supported 00:21:44.999 SGL Metadata Address: Not Supported 00:21:44.999 SGL Offset: Supported 00:21:44.999 Transport SGL Data Block: Not Supported 00:21:44.999 Replay Protected Memory Block: Not Supported 00:21:44.999 00:21:44.999 Firmware Slot Information 00:21:44.999 ========================= 00:21:44.999 Active slot: 1 00:21:44.999 Slot 1 Firmware Revision: 24.09 00:21:44.999 00:21:44.999 00:21:44.999 
Commands Supported and Effects 00:21:44.999 ============================== 00:21:44.999 Admin Commands 00:21:44.999 -------------- 00:21:44.999 Get Log Page (02h): Supported 00:21:44.999 Identify (06h): Supported 00:21:44.999 Abort (08h): Supported 00:21:44.999 Set Features (09h): Supported 00:21:44.999 Get Features (0Ah): Supported 00:21:44.999 Asynchronous Event Request (0Ch): Supported 00:21:44.999 Keep Alive (18h): Supported 00:21:44.999 I/O Commands 00:21:44.999 ------------ 00:21:44.999 Flush (00h): Supported LBA-Change 00:21:44.999 Write (01h): Supported LBA-Change 00:21:44.999 Read (02h): Supported 00:21:44.999 Compare (05h): Supported 00:21:44.999 Write Zeroes (08h): Supported LBA-Change 00:21:44.999 Dataset Management (09h): Supported LBA-Change 00:21:44.999 Copy (19h): Supported LBA-Change 00:21:44.999 00:21:44.999 Error Log 00:21:44.999 ========= 00:21:44.999 00:21:44.999 Arbitration 00:21:44.999 =========== 00:21:44.999 Arbitration Burst: 1 00:21:44.999 00:21:44.999 Power Management 00:21:44.999 ================ 00:21:44.999 Number of Power States: 1 00:21:44.999 Current Power State: Power State #0 00:21:44.999 Power State #0: 00:21:44.999 Max Power: 0.00 W 00:21:44.999 Non-Operational State: Operational 00:21:44.999 Entry Latency: Not Reported 00:21:44.999 Exit Latency: Not Reported 00:21:44.999 Relative Read Throughput: 0 00:21:44.999 Relative Read Latency: 0 00:21:44.999 Relative Write Throughput: 0 00:21:44.999 Relative Write Latency: 0 00:21:44.999 Idle Power: Not Reported 00:21:44.999 Active Power: Not Reported 00:21:44.999 Non-Operational Permissive Mode: Not Supported 00:21:44.999 00:21:44.999 Health Information 00:21:44.999 ================== 00:21:44.999 Critical Warnings: 00:21:44.999 Available Spare Space: OK 00:21:44.999 Temperature: OK 00:21:44.999 Device Reliability: OK 00:21:44.999 Read Only: No 00:21:44.999 Volatile Memory Backup: OK 00:21:44.999 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:44.999 Temperature Threshold: 0 Kelvin 
(-273 Celsius) 00:21:44.999 Available Spare: 0% 00:21:44.999 Available Spare Threshold: 0% 00:21:44.999 Life Percentage Used:[2024-07-15 18:47:01.574604] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.999 [2024-07-15 18:47:01.574608] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1947ec0) 00:21:44.999 [2024-07-15 18:47:01.574615] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.999 [2024-07-15 18:47:01.574628] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cb8c0, cid 7, qid 0 00:21:44.999 [2024-07-15 18:47:01.574708] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.999 [2024-07-15 18:47:01.574714] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.999 [2024-07-15 18:47:01.574717] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.999 [2024-07-15 18:47:01.574720] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cb8c0) on tqpair=0x1947ec0 00:21:44.999 [2024-07-15 18:47:01.574748] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:44.999 [2024-07-15 18:47:01.574757] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cae40) on tqpair=0x1947ec0 00:21:44.999 [2024-07-15 18:47:01.574763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.999 [2024-07-15 18:47:01.574767] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cafc0) on tqpair=0x1947ec0 00:21:44.999 [2024-07-15 18:47:01.574771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.999 [2024-07-15 18:47:01.574775] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x19cb140) on tqpair=0x1947ec0 00:21:44.999 [2024-07-15 18:47:01.574779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.999 [2024-07-15 18:47:01.574784] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cb2c0) on tqpair=0x1947ec0 00:21:44.999 [2024-07-15 18:47:01.574787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.999 [2024-07-15 18:47:01.574794] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.999 [2024-07-15 18:47:01.574798] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.999 [2024-07-15 18:47:01.574800] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1947ec0) 00:21:44.999 [2024-07-15 18:47:01.574807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.999 [2024-07-15 18:47:01.574819] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cb2c0, cid 3, qid 0 00:21:44.999 [2024-07-15 18:47:01.574893] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:44.999 [2024-07-15 18:47:01.574899] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:44.999 [2024-07-15 18:47:01.574901] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:44.999 [2024-07-15 18:47:01.574905] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cb2c0) on tqpair=0x1947ec0 00:21:44.999 [2024-07-15 18:47:01.574911] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:44.999 [2024-07-15 18:47:01.574914] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:44.999 [2024-07-15 18:47:01.574917] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1947ec0) 
00:21:45.000 [2024-07-15 18:47:01.574922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.000 [2024-07-15 18:47:01.574934] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cb2c0, cid 3, qid 0 00:21:45.000 [2024-07-15 18:47:01.575058] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.000 [2024-07-15 18:47:01.575064] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.000 [2024-07-15 18:47:01.575067] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.000 [2024-07-15 18:47:01.575070] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cb2c0) on tqpair=0x1947ec0 00:21:45.000 [2024-07-15 18:47:01.575073] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:45.000 [2024-07-15 18:47:01.575077] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:45.000 [2024-07-15 18:47:01.575085] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.000 [2024-07-15 18:47:01.575089] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.000 [2024-07-15 18:47:01.575092] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1947ec0) 00:21:45.000 [2024-07-15 18:47:01.575097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.000 [2024-07-15 18:47:01.575106] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cb2c0, cid 3, qid 0 00:21:45.000 [2024-07-15 18:47:01.575209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.000 [2024-07-15 18:47:01.575214] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.000 [2024-07-15 18:47:01.575217] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.000 [2024-07-15 18:47:01.575221] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cb2c0) on tqpair=0x1947ec0 00:21:45.000 [2024-07-15 18:47:01.579236] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.000 [2024-07-15 18:47:01.579243] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.000 [2024-07-15 18:47:01.579246] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1947ec0) 00:21:45.000 [2024-07-15 18:47:01.579252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.000 [2024-07-15 18:47:01.579262] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19cb2c0, cid 3, qid 0 00:21:45.000 [2024-07-15 18:47:01.579456] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.000 [2024-07-15 18:47:01.579462] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.000 [2024-07-15 18:47:01.579465] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.000 [2024-07-15 18:47:01.579469] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19cb2c0) on tqpair=0x1947ec0 00:21:45.000 [2024-07-15 18:47:01.579475] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:21:45.000 0% 00:21:45.000 Data Units Read: 0 00:21:45.000 Data Units Written: 0 00:21:45.000 Host Read Commands: 0 00:21:45.000 Host Write Commands: 0 00:21:45.000 Controller Busy Time: 0 minutes 00:21:45.000 Power Cycles: 0 00:21:45.000 Power On Hours: 0 hours 00:21:45.000 Unsafe Shutdowns: 0 00:21:45.000 Unrecoverable Media Errors: 0 00:21:45.000 Lifetime Error Log Entries: 0 00:21:45.000 Warning Temperature Time: 0 minutes 00:21:45.000 Critical Temperature Time: 0 minutes 00:21:45.000 00:21:45.000 Number 
of Queues 00:21:45.000 ================ 00:21:45.000 Number of I/O Submission Queues: 127 00:21:45.000 Number of I/O Completion Queues: 127 00:21:45.000 00:21:45.000 Active Namespaces 00:21:45.000 ================= 00:21:45.000 Namespace ID:1 00:21:45.000 Error Recovery Timeout: Unlimited 00:21:45.000 Command Set Identifier: NVM (00h) 00:21:45.000 Deallocate: Supported 00:21:45.000 Deallocated/Unwritten Error: Not Supported 00:21:45.000 Deallocated Read Value: Unknown 00:21:45.000 Deallocate in Write Zeroes: Not Supported 00:21:45.000 Deallocated Guard Field: 0xFFFF 00:21:45.000 Flush: Supported 00:21:45.000 Reservation: Supported 00:21:45.000 Namespace Sharing Capabilities: Multiple Controllers 00:21:45.000 Size (in LBAs): 131072 (0GiB) 00:21:45.000 Capacity (in LBAs): 131072 (0GiB) 00:21:45.000 Utilization (in LBAs): 131072 (0GiB) 00:21:45.000 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:45.000 EUI64: ABCDEF0123456789 00:21:45.000 UUID: fabf6dac-e871-4704-aee6-9212b897b1b3 00:21:45.000 Thin Provisioning: Not Supported 00:21:45.000 Per-NS Atomic Units: Yes 00:21:45.000 Atomic Boundary Size (Normal): 0 00:21:45.000 Atomic Boundary Size (PFail): 0 00:21:45.000 Atomic Boundary Offset: 0 00:21:45.000 Maximum Single Source Range Length: 65535 00:21:45.000 Maximum Copy Length: 65535 00:21:45.000 Maximum Source Range Count: 1 00:21:45.000 NGUID/EUI64 Never Reused: No 00:21:45.000 Namespace Write Protected: No 00:21:45.000 Number of LBA Formats: 1 00:21:45.000 Current LBA Format: LBA Format #00 00:21:45.000 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:45.000 00:21:45.000 18:47:01 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:45.000 18:47:01 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:45.000 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.000 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:45.000 
18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.000 18:47:01 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:45.000 18:47:01 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:45.000 18:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:45.000 18:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:21:45.000 18:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:45.000 18:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:21:45.000 18:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:45.000 18:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:45.000 rmmod nvme_tcp 00:21:45.000 rmmod nvme_fabrics 00:21:45.000 rmmod nvme_keyring 00:21:45.000 18:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:45.000 18:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:21:45.000 18:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:21:45.000 18:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1170150 ']' 00:21:45.000 18:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1170150 00:21:45.000 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1170150 ']' 00:21:45.000 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1170150 00:21:45.000 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:21:45.000 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:45.000 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1170150 00:21:45.259 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:45.259 18:47:01 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:45.259 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1170150' 00:21:45.259 killing process with pid 1170150 00:21:45.259 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1170150 00:21:45.259 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1170150 00:21:45.259 18:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:45.259 18:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:45.259 18:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:45.259 18:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:45.259 18:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:45.259 18:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.259 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:45.259 18:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.834 18:47:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:47.834 00:21:47.834 real 0m9.209s 00:21:47.834 user 0m7.504s 00:21:47.834 sys 0m4.397s 00:21:47.834 18:47:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:47.834 18:47:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:47.834 ************************************ 00:21:47.834 END TEST nvmf_identify 00:21:47.834 ************************************ 00:21:47.834 18:47:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:47.834 18:47:04 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:47.834 
18:47:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:47.834 18:47:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:47.834 18:47:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:47.834 ************************************ 00:21:47.834 START TEST nvmf_perf 00:21:47.834 ************************************ 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:47.834 * Looking for test storage... 00:21:47.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:47.834 
18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@285 -- # xtrace_disable 00:21:47.834 18:47:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:53.116 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:53.116 18:47:09 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:53.116 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:53.116 Found net devices under 0000:86:00.0: cvl_0_0 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:53.116 Found net devices under 0000:86:00.1: cvl_0_1 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:53.116 18:47:09 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:53.116 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:53.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:53.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:21:53.117 00:21:53.117 --- 10.0.0.2 ping statistics --- 00:21:53.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.117 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:53.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:53.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:21:53.117 00:21:53.117 --- 10.0.0.1 ping statistics --- 00:21:53.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.117 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1173840 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1173840 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1173840 ']' 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:53.117 18:47:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:53.376 [2024-07-15 18:47:09.864829] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:21:53.376 [2024-07-15 18:47:09.864873] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:53.376 EAL: No free 2048 kB hugepages reported on node 1 00:21:53.376 [2024-07-15 18:47:09.925801] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:53.376 [2024-07-15 18:47:10.004601] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:53.376 [2024-07-15 18:47:10.004643] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:53.376 [2024-07-15 18:47:10.004651] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:53.376 [2024-07-15 18:47:10.004658] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:53.376 [2024-07-15 18:47:10.004663] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:53.376 [2024-07-15 18:47:10.004711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.376 [2024-07-15 18:47:10.004732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:53.376 [2024-07-15 18:47:10.004821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:53.376 [2024-07-15 18:47:10.004823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.316 18:47:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:54.316 18:47:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:21:54.316 18:47:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:54.316 18:47:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:54.316 18:47:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:54.316 18:47:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:54.316 18:47:10 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:54.316 18:47:10 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:57.608 18:47:13 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:57.608 18:47:13 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:57.608 18:47:13 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:21:57.608 18:47:13 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:57.608 18:47:14 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:57.608 18:47:14 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 
0000:5e:00.0 ']' 00:21:57.608 18:47:14 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:57.608 18:47:14 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:57.608 18:47:14 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:57.608 [2024-07-15 18:47:14.276496] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.608 18:47:14 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:57.869 18:47:14 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:57.869 18:47:14 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:58.129 18:47:14 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:58.129 18:47:14 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:58.388 18:47:14 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:58.388 [2024-07-15 18:47:14.991200] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.388 18:47:15 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:58.646 18:47:15 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:21:58.646 18:47:15 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 
00:21:58.646 18:47:15 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:58.646 18:47:15 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:00.056 Initializing NVMe Controllers 00:22:00.056 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:00.056 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:00.056 Initialization complete. Launching workers. 00:22:00.056 ======================================================== 00:22:00.056 Latency(us) 00:22:00.056 Device Information : IOPS MiB/s Average min max 00:22:00.056 PCIE (0000:5e:00.0) NSID 1 from core 0: 97824.85 382.13 326.84 39.29 4443.07 00:22:00.056 ======================================================== 00:22:00.056 Total : 97824.85 382.13 326.84 39.29 4443.07 00:22:00.056 00:22:00.056 18:47:16 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:00.056 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.992 Initializing NVMe Controllers 00:22:00.993 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:00.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:00.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:00.993 Initialization complete. Launching workers. 
00:22:00.993 ======================================================== 00:22:00.993 Latency(us) 00:22:00.993 Device Information : IOPS MiB/s Average min max 00:22:00.993 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 95.00 0.37 10922.70 145.76 45667.89 00:22:00.993 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 40.00 0.16 25910.78 7960.54 47886.91 00:22:00.993 ======================================================== 00:22:00.993 Total : 135.00 0.53 15363.61 145.76 47886.91 00:22:00.993 00:22:00.993 18:47:17 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:01.251 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.629 Initializing NVMe Controllers 00:22:02.629 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:02.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:02.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:02.629 Initialization complete. Launching workers. 
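Each table ends with a "Total" row combining the per-namespace rows: IOPS are summed, the average latency is the IOPS-weighted mean, and min/max are taken across rows. A hedged sketch of that aggregation using the q=1 TCP rows above (95.00 IOPS at 10922.70 us and 40.00 IOPS at 25910.78 us, giving the printed total of 135.00 IOPS at 15363.61 us):

```python
# Reproduce the "Total" row from per-namespace rows of the q=1 TCP run above.
rows = [  # (iops, avg_us, min_us, max_us)
    (95.00, 10922.70, 145.76, 45667.89),   # NSID 1
    (40.00, 25910.78, 7960.54, 47886.91),  # NSID 2
]
total_iops = sum(r[0] for r in rows)
avg_us = sum(r[0] * r[1] for r in rows) / total_iops  # IOPS-weighted mean
min_us = min(r[2] for r in rows)
max_us = max(r[3] for r in rows)
```

For the later tables the same rule only matches approximately, since the printed per-row IOPS are themselves rounded before the weighted mean is recomputed.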
00:22:02.629 ======================================================== 00:22:02.629 Latency(us) 00:22:02.629 Device Information : IOPS MiB/s Average min max 00:22:02.629 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10640.82 41.57 3008.24 403.85 6714.92 00:22:02.629 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3835.93 14.98 8387.27 6443.37 15938.95 00:22:02.629 ======================================================== 00:22:02.629 Total : 14476.75 56.55 4433.53 403.85 15938.95 00:22:02.629 00:22:02.629 18:47:18 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:02.629 18:47:18 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:02.629 18:47:18 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:02.629 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.165 Initializing NVMe Controllers 00:22:05.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:05.165 Controller IO queue size 128, less than required. 00:22:05.165 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:05.165 Controller IO queue size 128, less than required. 00:22:05.165 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:05.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:05.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:05.165 Initialization complete. Launching workers. 
00:22:05.165 ======================================================== 00:22:05.165 Latency(us) 00:22:05.165 Device Information : IOPS MiB/s Average min max 00:22:05.165 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1249.28 312.32 105448.31 61439.20 161240.63 00:22:05.165 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 581.90 145.47 224745.36 79315.21 345091.31 00:22:05.165 ======================================================== 00:22:05.165 Total : 1831.18 457.79 143357.61 61439.20 345091.31 00:22:05.165 00:22:05.165 18:47:21 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:05.165 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.165 No valid NVMe controllers or AIO or URING devices found 00:22:05.165 Initializing NVMe Controllers 00:22:05.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:05.165 Controller IO queue size 128, less than required. 00:22:05.165 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:05.165 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:05.165 Controller IO queue size 128, less than required. 00:22:05.165 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:05.165 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:05.165 WARNING: Some requested NVMe devices were skipped 00:22:05.165 18:47:21 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:05.165 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.704 Initializing NVMe Controllers 00:22:07.704 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:07.704 Controller IO queue size 128, less than required. 00:22:07.704 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:07.704 Controller IO queue size 128, less than required. 00:22:07.704 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:07.704 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:07.704 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:07.704 Initialization complete. Launching workers. 
00:22:07.704 00:22:07.704 ==================== 00:22:07.704 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:07.704 TCP transport: 00:22:07.704 polls: 30425 00:22:07.704 idle_polls: 10051 00:22:07.704 sock_completions: 20374 00:22:07.704 nvme_completions: 4967 00:22:07.704 submitted_requests: 7392 00:22:07.704 queued_requests: 1 00:22:07.704 00:22:07.704 ==================== 00:22:07.704 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:07.704 TCP transport: 00:22:07.704 polls: 30728 00:22:07.704 idle_polls: 10980 00:22:07.704 sock_completions: 19748 00:22:07.704 nvme_completions: 5143 00:22:07.704 submitted_requests: 7758 00:22:07.704 queued_requests: 1 00:22:07.704 ======================================================== 00:22:07.704 Latency(us) 00:22:07.704 Device Information : IOPS MiB/s Average min max 00:22:07.704 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1241.43 310.36 106276.62 60969.92 168081.66 00:22:07.704 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1285.42 321.36 101117.67 46402.79 160237.07 00:22:07.704 ======================================================== 00:22:07.705 Total : 2526.85 631.71 103652.23 46402.79 168081.66 00:22:07.705 00:22:07.705 18:47:24 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:07.705 18:47:24 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:07.705 18:47:24 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:07.705 18:47:24 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:07.705 18:47:24 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:07.705 18:47:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:07.705 18:47:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:22:07.705 18:47:24 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:07.705 18:47:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:22:07.705 18:47:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:07.705 18:47:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:07.705 rmmod nvme_tcp 00:22:07.705 rmmod nvme_fabrics 00:22:07.705 rmmod nvme_keyring 00:22:07.705 18:47:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:07.705 18:47:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:22:07.705 18:47:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:22:07.705 18:47:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1173840 ']' 00:22:07.705 18:47:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1173840 00:22:07.705 18:47:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1173840 ']' 00:22:07.705 18:47:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1173840 00:22:07.705 18:47:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:22:07.705 18:47:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:07.705 18:47:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1173840 00:22:07.964 18:47:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:07.964 18:47:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:07.964 18:47:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1173840' 00:22:07.964 killing process with pid 1173840 00:22:07.964 18:47:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1173840 00:22:07.964 18:47:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1173840 00:22:09.411 18:47:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:09.411 18:47:25 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:09.411 18:47:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:09.411 18:47:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:09.411 18:47:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:09.411 18:47:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.411 18:47:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:09.411 18:47:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.315 18:47:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:11.315 00:22:11.315 real 0m23.924s 00:22:11.315 user 1m3.774s 00:22:11.315 sys 0m7.248s 00:22:11.315 18:47:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:11.315 18:47:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:11.315 ************************************ 00:22:11.315 END TEST nvmf_perf 00:22:11.315 ************************************ 00:22:11.315 18:47:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:11.315 18:47:28 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:11.315 18:47:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:11.315 18:47:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:11.315 18:47:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:11.574 ************************************ 00:22:11.574 START TEST nvmf_fio_host 00:22:11.574 ************************************ 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:11.574 * Looking for test 
storage... 00:22:11.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:11.574 
18:47:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 
00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:11.574 18:47:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@297 -- # local -ga x722 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:16.842 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:16.842 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:16.842 Found net devices under 0000:86:00.0: cvl_0_0 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:16.842 Found net devices under 0000:86:00.1: cvl_0_1 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:22:16.842 
18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:16.842 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:17.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:17.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:22:17.101 00:22:17.101 --- 10.0.0.2 ping statistics --- 00:22:17.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.101 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:17.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:17.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:22:17.101 00:22:17.101 --- 10.0.0.1 ping statistics --- 00:22:17.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.101 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:17.101 18:47:33 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1179808 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1179808 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1179808 ']' 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:17.101 18:47:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.101 [2024-07-15 18:47:33.764484] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:22:17.102 [2024-07-15 18:47:33.764533] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.102 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.360 [2024-07-15 18:47:33.825613] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:17.360 [2024-07-15 18:47:33.905869] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.360 [2024-07-15 18:47:33.905905] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.360 [2024-07-15 18:47:33.905912] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.360 [2024-07-15 18:47:33.905918] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.360 [2024-07-15 18:47:33.905923] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:17.360 [2024-07-15 18:47:33.905964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.360 [2024-07-15 18:47:33.906058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.360 [2024-07-15 18:47:33.906076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:17.360 [2024-07-15 18:47:33.906077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.927 18:47:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:17.927 18:47:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:22:17.927 18:47:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:18.186 [2024-07-15 18:47:34.730568] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.186 18:47:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:18.186 18:47:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:18.186 18:47:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.186 18:47:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:18.445 Malloc1 00:22:18.445 18:47:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:18.703 18:47:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:18.703 18:47:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:18.961 
[2024-07-15 18:47:35.508891] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.961 18:47:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:19.219 18:47:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:19.219 18:47:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:19.219 18:47:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:19.219 18:47:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:19.219 18:47:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:19.219 18:47:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:19.219 18:47:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:19.219 18:47:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:19.219 18:47:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:19.219 18:47:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:19.219 18:47:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:19.219 18:47:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:19.219 18:47:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:19.219 18:47:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:19.219 18:47:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:19.219 18:47:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:19.219 18:47:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:19.219 18:47:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:19.219 18:47:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:19.219 18:47:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:19.219 18:47:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:19.219 18:47:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:19.219 18:47:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:19.477 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:19.477 fio-3.35 00:22:19.477 Starting 1 thread 00:22:19.477 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.026 00:22:22.026 test: (groupid=0, jobs=1): err= 0: pid=1180396: Mon Jul 15 18:47:38 2024 00:22:22.026 read: IOPS=11.9k, BW=46.4MiB/s (48.6MB/s)(92.9MiB/2005msec) 00:22:22.026 slat (nsec): 
min=1579, max=244473, avg=1749.80, stdev=2269.84 00:22:22.026 clat (usec): min=3046, max=10634, avg=5956.54, stdev=454.78 00:22:22.026 lat (usec): min=3080, max=10636, avg=5958.29, stdev=454.71 00:22:22.026 clat percentiles (usec): 00:22:22.026 | 1.00th=[ 4883], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5604], 00:22:22.026 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 6063], 00:22:22.026 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6652], 00:22:22.026 | 99.00th=[ 6915], 99.50th=[ 7046], 99.90th=[ 8291], 99.95th=[ 9503], 00:22:22.026 | 99.99th=[10552] 00:22:22.026 bw ( KiB/s): min=46568, max=47952, per=99.97%, avg=47450.00, stdev=637.01, samples=4 00:22:22.026 iops : min=11642, max=11988, avg=11862.50, stdev=159.25, samples=4 00:22:22.026 write: IOPS=11.8k, BW=46.1MiB/s (48.4MB/s)(92.5MiB/2005msec); 0 zone resets 00:22:22.026 slat (nsec): min=1624, max=239859, avg=1835.95, stdev=1746.48 00:22:22.026 clat (usec): min=2491, max=9647, avg=4820.16, stdev=376.95 00:22:22.026 lat (usec): min=2507, max=9649, avg=4822.00, stdev=376.95 00:22:22.026 clat percentiles (usec): 00:22:22.026 | 1.00th=[ 3916], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4555], 00:22:22.026 | 30.00th=[ 4621], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4883], 00:22:22.026 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5276], 95.00th=[ 5407], 00:22:22.026 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 7767], 99.95th=[ 8455], 00:22:22.026 | 99.99th=[ 9503] 00:22:22.026 bw ( KiB/s): min=46784, max=47808, per=99.99%, avg=47246.00, stdev=429.84, samples=4 00:22:22.026 iops : min=11696, max=11952, avg=11811.50, stdev=107.46, samples=4 00:22:22.026 lat (msec) : 4=0.81%, 10=99.17%, 20=0.02% 00:22:22.026 cpu : usr=71.01%, sys=25.95%, ctx=80, majf=0, minf=6 00:22:22.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:22.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:22.026 issued rwts: total=23791,23684,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:22.026 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:22.026 00:22:22.026 Run status group 0 (all jobs): 00:22:22.026 READ: bw=46.4MiB/s (48.6MB/s), 46.4MiB/s-46.4MiB/s (48.6MB/s-48.6MB/s), io=92.9MiB (97.4MB), run=2005-2005msec 00:22:22.026 WRITE: bw=46.1MiB/s (48.4MB/s), 46.1MiB/s-46.1MiB/s (48.4MB/s-48.4MB/s), io=92.5MiB (97.0MB), run=2005-2005msec 00:22:22.026 18:47:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:22.026 18:47:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:22.026 18:47:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:22.026 18:47:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:22.026 18:47:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:22.026 18:47:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:22.026 18:47:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:22.026 18:47:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:22.026 18:47:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:22.026 18:47:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:22.026 18:47:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:22.026 18:47:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:22.026 18:47:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:22.026 18:47:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:22.026 18:47:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:22.026 18:47:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:22.026 18:47:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:22.026 18:47:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:22.026 18:47:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:22.026 18:47:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:22.026 18:47:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:22.026 18:47:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:22.026 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:22.026 fio-3.35 00:22:22.026 Starting 1 thread 00:22:22.026 EAL: No free 2048 kB hugepages reported on node 1 00:22:24.564 00:22:24.564 test: (groupid=0, jobs=1): err= 0: pid=1180966: Mon Jul 15 18:47:41 2024 00:22:24.564 read: IOPS=10.8k, BW=168MiB/s (176MB/s)(337MiB/2005msec) 00:22:24.564 slat (nsec): 
min=2625, max=84785, avg=2887.43, stdev=1213.20 00:22:24.564 clat (usec): min=1605, max=14335, avg=6995.38, stdev=1758.57 00:22:24.564 lat (usec): min=1608, max=14338, avg=6998.27, stdev=1758.63 00:22:24.564 clat percentiles (usec): 00:22:24.564 | 1.00th=[ 3654], 5.00th=[ 4359], 10.00th=[ 4817], 20.00th=[ 5473], 00:22:24.564 | 30.00th=[ 5932], 40.00th=[ 6390], 50.00th=[ 6915], 60.00th=[ 7439], 00:22:24.564 | 70.00th=[ 7898], 80.00th=[ 8356], 90.00th=[ 9241], 95.00th=[10159], 00:22:24.564 | 99.00th=[11731], 99.50th=[12649], 99.90th=[13304], 99.95th=[13435], 00:22:24.564 | 99.99th=[14222] 00:22:24.564 bw ( KiB/s): min=84800, max=91872, per=50.45%, avg=86904.00, stdev=3350.12, samples=4 00:22:24.564 iops : min= 5300, max= 5742, avg=5431.50, stdev=209.38, samples=4 00:22:24.564 write: IOPS=6273, BW=98.0MiB/s (103MB/s)(178MiB/1815msec); 0 zone resets 00:22:24.564 slat (usec): min=30, max=240, avg=32.02, stdev= 4.81 00:22:24.564 clat (usec): min=3337, max=14732, avg=8497.93, stdev=1516.18 00:22:24.564 lat (usec): min=3369, max=14763, avg=8529.94, stdev=1516.56 00:22:24.564 clat percentiles (usec): 00:22:24.564 | 1.00th=[ 5538], 5.00th=[ 6390], 10.00th=[ 6783], 20.00th=[ 7177], 00:22:24.564 | 30.00th=[ 7570], 40.00th=[ 7963], 50.00th=[ 8291], 60.00th=[ 8717], 00:22:24.564 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[10683], 95.00th=[11338], 00:22:24.564 | 99.00th=[12387], 99.50th=[12911], 99.90th=[14615], 99.95th=[14746], 00:22:24.564 | 99.99th=[14746] 00:22:24.564 bw ( KiB/s): min=88768, max=95008, per=90.49%, avg=90832.00, stdev=2849.56, samples=4 00:22:24.564 iops : min= 5548, max= 5938, avg=5677.00, stdev=178.10, samples=4 00:22:24.564 lat (msec) : 2=0.03%, 4=1.62%, 10=88.74%, 20=9.62% 00:22:24.564 cpu : usr=86.23%, sys=12.33%, ctx=38, majf=0, minf=3 00:22:24.564 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:24.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:24.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:24.564 issued rwts: total=21585,11387,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:24.564 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:24.564 00:22:24.564 Run status group 0 (all jobs): 00:22:24.564 READ: bw=168MiB/s (176MB/s), 168MiB/s-168MiB/s (176MB/s-176MB/s), io=337MiB (354MB), run=2005-2005msec 00:22:24.564 WRITE: bw=98.0MiB/s (103MB/s), 98.0MiB/s-98.0MiB/s (103MB/s-103MB/s), io=178MiB (187MB), run=1815-1815msec 00:22:24.564 18:47:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:24.823 18:47:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:24.823 18:47:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:24.823 18:47:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:24.823 18:47:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:24.823 18:47:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:24.823 18:47:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:22:24.823 18:47:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:24.823 18:47:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:22:24.823 18:47:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:24.823 18:47:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:24.823 rmmod nvme_tcp 00:22:24.823 rmmod nvme_fabrics 00:22:24.823 rmmod nvme_keyring 00:22:24.823 18:47:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:24.823 18:47:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:22:24.823 18:47:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:22:24.823 18:47:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1179808 ']' 00:22:24.823 
18:47:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1179808 00:22:24.823 18:47:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1179808 ']' 00:22:24.823 18:47:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1179808 00:22:24.823 18:47:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:22:24.824 18:47:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:24.824 18:47:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1179808 00:22:24.824 18:47:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:24.824 18:47:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:24.824 18:47:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1179808' 00:22:24.824 killing process with pid 1179808 00:22:24.824 18:47:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1179808 00:22:24.824 18:47:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1179808 00:22:25.083 18:47:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:25.083 18:47:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:25.083 18:47:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:25.083 18:47:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:25.083 18:47:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:25.083 18:47:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.083 18:47:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:25.083 18:47:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.989 18:47:43 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:26.990 00:22:26.990 real 0m15.630s 00:22:26.990 user 0m47.289s 00:22:26.990 sys 0m6.104s 00:22:26.990 18:47:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:26.990 18:47:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.990 ************************************ 00:22:26.990 END TEST nvmf_fio_host 00:22:26.990 ************************************ 00:22:27.248 18:47:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:27.248 18:47:43 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:27.248 18:47:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:27.248 18:47:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:27.248 18:47:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:27.248 ************************************ 00:22:27.248 START TEST nvmf_failover 00:22:27.248 ************************************ 00:22:27.248 18:47:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:27.248 * Looking for test storage... 
00:22:27.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:27.248 18:47:43 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:27.248 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:27.248 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:27.248 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:27.248 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:27.248 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:27.248 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:27.248 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:27.248 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:27.248 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:27.248 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:27.248 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:27.248 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:27.248 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:27.248 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:27.248 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:27.248 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:27.248 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:27.248 18:47:43 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:27.248 18:47:43 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:27.248 18:47:43 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:27.248 18:47:43 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:27.249 18:47:43 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:22:27.249 18:47:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:22:32.583 18:47:49 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:32.583 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:32.583 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:32.583 Found net devices under 0000:86:00.0: cvl_0_0 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:32.583 Found net devices under 0000:86:00.1: cvl_0_1 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:32.583 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:32.842 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:32.842 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:32.842 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:32.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:22:32.842 00:22:32.842 --- 10.0.0.2 ping statistics --- 00:22:32.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.842 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:22:32.842 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:32.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:32.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:22:32.842 00:22:32.842 --- 10.0.0.1 ping statistics --- 00:22:32.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.842 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:22:32.842 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.842 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:22:32.842 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:32.842 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.842 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:32.842 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:32.842 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.842 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:32.842 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:32.842 18:47:49 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:32.842 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:32.842 18:47:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:32.842 18:47:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:32.842 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1184730 00:22:32.842 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1184730 00:22:32.842 18:47:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:32.842 18:47:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1184730 ']' 
00:22:32.842 18:47:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.842 18:47:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:32.842 18:47:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.842 18:47:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:32.842 18:47:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:32.842 [2024-07-15 18:47:49.458747] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:22:32.842 [2024-07-15 18:47:49.458788] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.843 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.843 [2024-07-15 18:47:49.516964] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:33.101 [2024-07-15 18:47:49.595711] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.102 [2024-07-15 18:47:49.595747] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.102 [2024-07-15 18:47:49.595758] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.102 [2024-07-15 18:47:49.595764] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.102 [2024-07-15 18:47:49.595769] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
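For reference, the nvmf_tcp_init steps traced above (nvmf/common.sh@229-268) reduce to the sequence below. This is a sketch, not the test script itself: the interface names (cvl_0_0/cvl_0_1), the 10.0.0.0/24 addressing, the namespace name, and port 4420 are taken from this run, and setup_nvmf_netns is a hypothetical dry-run helper that only prints the commands (pipe its output to `sudo bash` to actually apply them, since they need root and real NICs):

```shell
# Prints the netns/addressing commands that nvmf_tcp_init performed in this log:
# the target-side port is moved into its own network namespace, both sides get
# /24 addresses, the NVMe/TCP port is opened, and reachability is pinged both ways.
setup_nvmf_netns() {
  local tgt_if=$1 ini_if=$2 ns=$3
  cat <<EOF
ip -4 addr flush $tgt_if
ip -4 addr flush $ini_if
ip netns add $ns
ip link set $tgt_if netns $ns
ip addr add 10.0.0.1/24 dev $ini_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if
ip link set $ini_if up
ip netns exec $ns ip link set $tgt_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec $ns ping -c 1 10.0.0.1
EOF
}

cmds=$(setup_nvmf_netns cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk)
printf '%s\n' "$cmds"
```

Because the target interface lives inside the namespace, the nvmf_tgt app is later launched under `ip netns exec cvl_0_0_ns_spdk ...` (nvmf/common.sh@480 above), while the initiator side stays in the root namespace.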
00:22:33.102 [2024-07-15 18:47:49.595807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:33.102 [2024-07-15 18:47:49.595891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:33.102 [2024-07-15 18:47:49.595892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.670 18:47:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:33.670 18:47:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:33.670 18:47:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:33.670 18:47:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:33.670 18:47:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:33.670 18:47:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.670 18:47:50 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:33.928 [2024-07-15 18:47:50.464947] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.928 18:47:50 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:34.187 Malloc0 00:22:34.187 18:47:50 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:34.187 18:47:50 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:34.446 18:47:51 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:34.706 [2024-07-15 18:47:51.224476] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:34.706 18:47:51 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:34.706 [2024-07-15 18:47:51.388928] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:34.964 18:47:51 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:34.964 [2024-07-15 18:47:51.569509] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:34.964 18:47:51 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1185190 00:22:34.964 18:47:51 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:34.964 18:47:51 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1185190 /var/tmp/bdevperf.sock 00:22:34.964 18:47:51 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:34.964 18:47:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1185190 ']' 00:22:34.964 18:47:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:34.964 18:47:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:34.964 18:47:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:34.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:34.964 18:47:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:34.964 18:47:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:35.900 18:47:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:35.900 18:47:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:35.900 18:47:52 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:36.159 NVMe0n1 00:22:36.159 18:47:52 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:36.747 00:22:36.747 18:47:53 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1185431 00:22:36.747 18:47:53 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:36.747 18:47:53 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:37.692 18:47:54 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:37.692 [2024-07-15 18:47:54.342374] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03080 is same with the state(5) to be set
[previous message repeated roughly 100 more times with timestamps 18:47:54.342443 through 18:47:54.343137; identical lines trimmed]
00:22:37.694 18:47:54 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:40.986 18:47:57 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:40.986 00:22:41.246 18:47:57 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:41.246 [2024-07-15 18:47:57.868904] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set
[previous message repeated 14 more times with timestamps 18:47:57.868942 through 18:47:57.869028; identical lines trimmed]
[2024-07-15 18:47:57.869034] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869039] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869045] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869051] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869057] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869063] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869069] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869075] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869081] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869086] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869093] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869099] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869105] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869111] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869118] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869125] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869131] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869137] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869145] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869154] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869160] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869166] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869172] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869178] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869184] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869190] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869197] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869202] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869214] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869221] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869235] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869242] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869250] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869256] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869264] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869270] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869276] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869282] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869287] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869293] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869299] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869305] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869320] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869327] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869339] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869346] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869354] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869361] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869367] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869394] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869400] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869406] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869412] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869418] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869423] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869429] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869435] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869442] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869449] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869461] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869467] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869473] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869479] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869484] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 [2024-07-15 18:47:57.869490] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f20 is same with the state(5) to be set 00:22:41.247 18:47:57 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:44.537 18:48:00 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:44.537 [2024-07-15 18:48:01.070414] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:44.537 18:48:01 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:22:45.472 18:48:02 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:22:45.732 [2024-07-15 18:48:02.280430] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04aa0 is same with the state(5) to be set
[repeated identical tcp.c:1607 recv-state messages for tqpair=0x1f04aa0 (18:48:02.280469 - 18:48:02.280994) omitted]
00:22:45.733 18:48:02 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1185431
00:22:52.306 0
00:22:52.306 18:48:08 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1185190
00:22:52.306 18:48:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1185190 ']'
00:22:52.306 18:48:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1185190
00:22:52.306 18:48:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:22:52.306 18:48:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:22:52.306 18:48:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1185190
00:22:52.306 18:48:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:22:52.306 18:48:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:22:52.306 18:48:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1185190'
killing process with pid 1185190
00:22:52.306 18:48:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1185190
00:22:52.306 18:48:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1185190
00:22:52.306 18:48:08 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:52.306 [2024-07-15 18:47:51.641960] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization...
00:22:52.306 [2024-07-15 18:47:51.642008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1185190 ]
00:22:52.306 EAL: No free 2048 kB hugepages reported on node 1
00:22:52.306 [2024-07-15 18:47:51.696833] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:52.306 [2024-07-15 18:47:51.771869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:22:52.306 Running I/O for 15 seconds...
00:22:52.306 [2024-07-15 18:47:54.344336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.306 [2024-07-15 18:47:54.344374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.306 [2024-07-15 18:47:54.344389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.306 [2024-07-15 18:47:54.344398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.306 [2024-07-15 18:47:54.344406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.306 [2024-07-15 18:47:54.344413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.306 [2024-07-15 18:47:54.344422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.306 [2024-07-15 18:47:54.344428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.306 [2024-07-15 18:47:54.344436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.306 [2024-07-15 18:47:54.344443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.306 [2024-07-15 18:47:54.344451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.306 [2024-07-15 18:47:54.344458] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.306 [2024-07-15 18:47:54.344466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.306 [2024-07-15 18:47:54.344472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.306 [2024-07-15 18:47:54.344479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.306 [2024-07-15 18:47:54.344486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.306 [2024-07-15 18:47:54.344495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.306 [2024-07-15 18:47:54.344501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.306 [2024-07-15 18:47:54.344509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.306 [2024-07-15 18:47:54.344516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.306 [2024-07-15 18:47:54.344523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.306 [2024-07-15 18:47:54.344529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.306 [2024-07-15 18:47:54.344542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 
lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.306 [2024-07-15 18:47:54.344549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.306 [2024-07-15 18:47:54.344557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.306 [2024-07-15 18:47:54.344564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.306 [2024-07-15 18:47:54.344572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.306 [2024-07-15 18:47:54.344578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.306 [2024-07-15 18:47:54.344586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.306 [2024-07-15 18:47:54.344594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.306 [2024-07-15 18:47:54.344602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.306 [2024-07-15 18:47:54.344609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.306 [2024-07-15 18:47:54.344616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.306 [2024-07-15 18:47:54.344623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.306 
[2024-07-15 18:47:54.344631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.306 [2024-07-15 18:47:54.344638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.306 [2024-07-15 18:47:54.344646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.306 [2024-07-15 18:47:54.344652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.306 [2024-07-15 18:47:54.344660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.344667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.344674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.344680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.344688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.344695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.344702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.344709] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.344717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.344724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.344732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.344740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.344749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.344755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.344763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.344770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.344778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:95312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.344784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.344793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 
lba:95320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.344800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.344807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.344814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.344821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.344828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.344836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.344842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.344850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.344857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.344865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.344871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 
[2024-07-15 18:47:54.344879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.344886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.344894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.344900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.344909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.344916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.344923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.344929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.344937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:95400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.344944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.344952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.344960] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.344967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.344974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.344982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.344988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.344996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.345003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.345011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.345017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.345024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:95448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.345031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.345040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 
lba:95456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.345047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.345054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.345060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.345068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.345074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.345082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.345093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.345101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.345108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.345116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:95496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.345123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 
[2024-07-15 18:47:54.345131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.345137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.345146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.345152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.345160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.345166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.345174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.345180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.345188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.345194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.345202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.345209] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.345217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.345223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.345235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.345242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.345250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.345257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.345265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.345272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.307 [2024-07-15 18:47:54.345282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.307 [2024-07-15 18:47:54.345289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 
lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.308 [2024-07-15 18:47:54.345304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.308 [2024-07-15 18:47:54.345318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 
[2024-07-15 18:47:54.345383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345460] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:95696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:95712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:95720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 
lba:95728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:95736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:95760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 
18:47:54.345626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:95776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:95784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:95816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345703] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:95856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:95864 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:95872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:95904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345869] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.308 [2024-07-15 18:47:54.345897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.308 [2024-07-15 18:47:54.345905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.309 [2024-07-15 18:47:54.345912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.309 [2024-07-15 18:47:54.345919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.309 [2024-07-15 18:47:54.345926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.309 [2024-07-15 18:47:54.345932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.309 [2024-07-15 18:47:54.345940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.309 [2024-07-15 18:47:54.345948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.309 [2024-07-15 18:47:54.345956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.309 [2024-07-15 18:47:54.345962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.309 [2024-07-15 18:47:54.345970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.309 [2024-07-15 18:47:54.345976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.309 [2024-07-15 18:47:54.345984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.309 [2024-07-15 18:47:54.345990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.309 [2024-07-15 18:47:54.345997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.309 [2024-07-15 18:47:54.346004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.309 [2024-07-15 18:47:54.346024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.309 [2024-07-15 18:47:54.346037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95992 len:8 PRP1 0x0 PRP2 0x0 00:22:52.309 [2024-07-15 18:47:54.346044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.309 [2024-07-15 18:47:54.346054] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.309 [2024-07-15 18:47:54.346059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.309 [2024-07-15 18:47:54.346065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96000 len:8 PRP1 0x0 PRP2 0x0 00:22:52.309 [2024-07-15 18:47:54.346072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.309 [2024-07-15 18:47:54.346078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.309 [2024-07-15 18:47:54.346083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.309 [2024-07-15 18:47:54.346088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96008 len:8 PRP1 0x0 PRP2 0x0 00:22:52.309 [2024-07-15 18:47:54.346095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.309 [2024-07-15 18:47:54.346102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.309 [2024-07-15 18:47:54.346107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.309 [2024-07-15 18:47:54.346113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96016 len:8 PRP1 0x0 PRP2 0x0 00:22:52.309 [2024-07-15 18:47:54.346119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.309 [2024-07-15 18:47:54.346126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.309 [2024-07-15 18:47:54.346131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.309 [2024-07-15 18:47:54.346136] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96024 len:8 PRP1 0x0 PRP2 0x0 00:22:52.309 [2024-07-15 18:47:54.346142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.309 [2024-07-15 18:47:54.346151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.309 [2024-07-15 18:47:54.346156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.309 [2024-07-15 18:47:54.346162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96032 len:8 PRP1 0x0 PRP2 0x0 00:22:52.309 [2024-07-15 18:47:54.346168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.309 [2024-07-15 18:47:54.346174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.309 [2024-07-15 18:47:54.346179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.309 [2024-07-15 18:47:54.346184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96040 len:8 PRP1 0x0 PRP2 0x0 00:22:52.309 [2024-07-15 18:47:54.346190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.309 [2024-07-15 18:47:54.346197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.309 [2024-07-15 18:47:54.346203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.309 [2024-07-15 18:47:54.346208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96048 len:8 PRP1 0x0 PRP2 0x0 00:22:52.309 [2024-07-15 18:47:54.346216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.309 [2024-07-15 18:47:54.346222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.309 [2024-07-15 18:47:54.346240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.309 [2024-07-15 18:47:54.346247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96056 len:8 PRP1 0x0 PRP2 0x0 00:22:52.309 [2024-07-15 18:47:54.346253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.309 [2024-07-15 18:47:54.346261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.309 [2024-07-15 18:47:54.346265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.309 [2024-07-15 18:47:54.346271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96064 len:8 PRP1 0x0 PRP2 0x0 00:22:52.309 [2024-07-15 18:47:54.346277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.309 [2024-07-15 18:47:54.346283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.309 [2024-07-15 18:47:54.346288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.309 [2024-07-15 18:47:54.346293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96072 len:8 PRP1 0x0 PRP2 0x0 00:22:52.309 [2024-07-15 18:47:54.346299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.309 [2024-07-15 18:47:54.346305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.309 [2024-07-15 18:47:54.346311] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.309 [2024-07-15 18:47:54.346316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96080 len:8 PRP1 0x0 PRP2 0x0 00:22:52.309 [2024-07-15 18:47:54.346322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.309 [2024-07-15 18:47:54.346328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.309 [2024-07-15 18:47:54.346335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.309 [2024-07-15 18:47:54.346341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96088 len:8 PRP1 0x0 PRP2 0x0 00:22:52.309 [2024-07-15 18:47:54.346347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.309 [2024-07-15 18:47:54.346353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.309 [2024-07-15 18:47:54.346359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.309 [2024-07-15 18:47:54.346364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96096 len:8 PRP1 0x0 PRP2 0x0 00:22:52.309 [2024-07-15 18:47:54.346370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.309 [2024-07-15 18:47:54.358647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.309 [2024-07-15 18:47:54.358658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.309 [2024-07-15 18:47:54.358665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96104 len:8 PRP1 0x0 PRP2 0x0 00:22:52.309 
[2024-07-15 18:47:54.358673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.309 [2024-07-15 18:47:54.358680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.309 [2024-07-15 18:47:54.358686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.309 [2024-07-15 18:47:54.358694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96112 len:8 PRP1 0x0 PRP2 0x0 00:22:52.309 [2024-07-15 18:47:54.358701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.309 [2024-07-15 18:47:54.358743] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a39300 was disconnected and freed. reset controller. 00:22:52.309 [2024-07-15 18:47:54.358753] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:52.309 [2024-07-15 18:47:54.358777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.309 [2024-07-15 18:47:54.358786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.309 [2024-07-15 18:47:54.358794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.309 [2024-07-15 18:47:54.358801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.309 [2024-07-15 18:47:54.358810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.309 [2024-07-15 18:47:54.358818] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.309 [2024-07-15 18:47:54.358825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.309 [2024-07-15 18:47:54.358832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.309 [2024-07-15 18:47:54.358839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:52.309 [2024-07-15 18:47:54.358869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1b540 (9): Bad file descriptor 00:22:52.309 [2024-07-15 18:47:54.361887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:52.309 [2024-07-15 18:47:54.391047] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:52.309 [2024-07-15 18:47:57.870896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.309 [2024-07-15 18:47:57.870932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.309 [2024-07-15 18:47:57.870947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.310 [2024-07-15 18:47:57.870955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.870965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.310 [2024-07-15 18:47:57.870972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.870980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.310 [2024-07-15 18:47:57.870986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.870995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.310 [2024-07-15 18:47:57.871001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.871010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.310 [2024-07-15 18:47:57.871020] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.871030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.310 [2024-07-15 18:47:57.871036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.871044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.310 [2024-07-15 18:47:57.871051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.871059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.310 [2024-07-15 18:47:57.871066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.871074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.310 [2024-07-15 18:47:57.871080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.871088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.310 [2024-07-15 18:47:57.871094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.871102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 
lba:18696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.310 [2024-07-15 18:47:57.871109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.871117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.310 [2024-07-15 18:47:57.871124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.871132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.310 [2024-07-15 18:47:57.871138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.871146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.310 [2024-07-15 18:47:57.871152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.871161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.310 [2024-07-15 18:47:57.871168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.871176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.310 [2024-07-15 18:47:57.871182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 
[2024-07-15 18:47:57.871190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.310 [2024-07-15 18:47:57.871197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.871207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.310 [2024-07-15 18:47:57.871214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.871222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.310 [2024-07-15 18:47:57.871233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.871241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.310 [2024-07-15 18:47:57.871248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.871256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.310 [2024-07-15 18:47:57.871263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.871272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.310 [2024-07-15 18:47:57.871279] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.871287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.310 [2024-07-15 18:47:57.871293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.871301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.310 [2024-07-15 18:47:57.871308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.871316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.310 [2024-07-15 18:47:57.871322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.871330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.310 [2024-07-15 18:47:57.871336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.871344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.310 [2024-07-15 18:47:57.871350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.871358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 
lba:18840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.310 [2024-07-15 18:47:57.871364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.871372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.310 [2024-07-15 18:47:57.871378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.871386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.310 [2024-07-15 18:47:57.871394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.871402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.310 [2024-07-15 18:47:57.871409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.871416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.310 [2024-07-15 18:47:57.871423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.871432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.310 [2024-07-15 18:47:57.871439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 
18:47:57.871447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.310 [2024-07-15 18:47:57.871453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.310 [2024-07-15 18:47:57.871461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.311 [2024-07-15 18:47:57.871468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.311 [2024-07-15 18:47:57.871475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.311 [2024-07-15 18:47:57.871481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.311 [2024-07-15 18:47:57.871489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.311 [2024-07-15 18:47:57.871495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.311 [2024-07-15 18:47:57.871503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.311 [2024-07-15 18:47:57.871510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.311 [2024-07-15 18:47:57.871518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.311 [2024-07-15 18:47:57.871524] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.311 [2024-07-15 18:47:57.871532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.311 [2024-07-15 18:47:57.871538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[repeated log entries collapsed: WRITE commands sqid:1, lba:18936 through lba:19360 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, timestamps 18:47:57.871546 through 18:47:57.872337]
00:22:52.312 [2024-07-15 18:47:57.872356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.312 [2024-07-15 18:47:57.872363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19368 len:8 PRP1 0x0 PRP2 0x0 00:22:52.312 [2024-07-15 18:47:57.872370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.312 [2024-07-15 18:47:57.872378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[repeated log entries collapsed: queued WRITE commands sqid:1 cid:0, lba:19376 through lba:19616 (len:8, PRP1 0x0 PRP2 0x0) each completed manually by nvme_qpair_manual_complete_request and reported as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, interleaved with nvme_qpair_abort_queued_reqs *ERROR*: aborting queued i/o, timestamps 18:47:57.872390 through 18:47:57.884906]
sqhd:0000 p:0 m:0 dnr:0 00:22:52.313 [2024-07-15 18:47:57.884913] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.313 [2024-07-15 18:47:57.884918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.313 [2024-07-15 18:47:57.884923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19624 len:8 PRP1 0x0 PRP2 0x0 00:22:52.313 [2024-07-15 18:47:57.884930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.313 [2024-07-15 18:47:57.884974] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1be6380 was disconnected and freed. reset controller. 00:22:52.313 [2024-07-15 18:47:57.884984] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:52.313 [2024-07-15 18:47:57.885007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.313 [2024-07-15 18:47:57.885015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.313 [2024-07-15 18:47:57.885023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.313 [2024-07-15 18:47:57.885030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.313 [2024-07-15 18:47:57.885040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.313 [2024-07-15 18:47:57.885047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.313 [2024-07-15 18:47:57.885054] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.313 [2024-07-15 18:47:57.885061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.313 [2024-07-15 18:47:57.885069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:52.313 [2024-07-15 18:47:57.885093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1b540 (9): Bad file descriptor 00:22:52.313 [2024-07-15 18:47:57.888094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:52.313 [2024-07-15 18:47:57.924733] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:52.313 [2024-07-15 18:48:02.281592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.313 [2024-07-15 18:48:02.281627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.313 [2024-07-15 18:48:02.281642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.313 [2024-07-15 18:48:02.281654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.313 [2024-07-15 18:48:02.281663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.313 [2024-07-15 18:48:02.281672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.313 [2024-07-15 18:48:02.281680] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.313 [2024-07-15 18:48:02.281687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.313 [2024-07-15 18:48:02.281695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.313 [2024-07-15 18:48:02.281702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.313 [2024-07-15 18:48:02.281710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.313 [2024-07-15 18:48:02.281716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.313 [2024-07-15 18:48:02.281725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.313 [2024-07-15 18:48:02.281731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.281739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.281745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.281753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.281759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.281767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.281774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.281782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.281788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.281796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.281803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.281811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.281816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.281825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.281832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.281840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:52.314 [2024-07-15 18:48:02.281848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.281855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.281862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.281869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.281877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.281885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.281891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.281899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.281905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.281913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.281919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.281928] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.281935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.281943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.281949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.281957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.281963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.281971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.281978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.281986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.281992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.282000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.282006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.282014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.282020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.282030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.282036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.282044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.282051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.282059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.282065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.282073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.282079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.282087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:52.314 [2024-07-15 18:48:02.282094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.282101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.282108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.282116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.282122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.282131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.282137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.282145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.282152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.282159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.282166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.282173] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.282181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.282190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.282196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.282204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.282212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.282220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.314 [2024-07-15 18:48:02.282234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.314 [2024-07-15 18:48:02.282242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.315 [2024-07-15 18:48:02.282248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.315 [2024-07-15 18:48:02.282263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.315 [2024-07-15 18:48:02.282277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.315 [2024-07-15 18:48:02.282292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.315 [2024-07-15 18:48:02.282306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.315 [2024-07-15 18:48:02.282320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.315 [2024-07-15 18:48:02.282336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:52.315 [2024-07-15 18:48:02.282352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.315 [2024-07-15 18:48:02.282367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.315 [2024-07-15 18:48:02.282381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.315 [2024-07-15 18:48:02.282395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.315 [2024-07-15 18:48:02.282411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.315 [2024-07-15 18:48:02.282425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282434] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.315 [2024-07-15 18:48:02.282441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.315 [2024-07-15 18:48:02.282457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.315 [2024-07-15 18:48:02.282470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.315 [2024-07-15 18:48:02.282485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.315 [2024-07-15 18:48:02.282499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.315 [2024-07-15 18:48:02.282513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.315 [2024-07-15 18:48:02.282529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.315 [2024-07-15 18:48:02.282543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.315 [2024-07-15 18:48:02.282557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.315 [2024-07-15 18:48:02.282572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.315 [2024-07-15 18:48:02.282590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.315 
[2024-07-15 18:48:02.282604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.315 [2024-07-15 18:48:02.282620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.315 [2024-07-15 18:48:02.282635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.315 [2024-07-15 18:48:02.282650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.315 [2024-07-15 18:48:02.282664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.315 [2024-07-15 18:48:02.282679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282686] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.315 [2024-07-15 18:48:02.282693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.315 [2024-07-15 18:48:02.282707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.315 [2024-07-15 18:48:02.282721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.315 [2024-07-15 18:48:02.282736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.315 [2024-07-15 18:48:02.282750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.315 [2024-07-15 18:48:02.282764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.315 [2024-07-15 18:48:02.282780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.315 [2024-07-15 18:48:02.282795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.315 [2024-07-15 18:48:02.282809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.315 [2024-07-15 18:48:02.282825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.315 [2024-07-15 18:48:02.282840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.315 [2024-07-15 18:48:02.282855] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.315 [2024-07-15 18:48:02.282869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.315 [2024-07-15 18:48:02.282878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.282885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.282893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.282899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.282907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.282913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.282922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.316 [2024-07-15 18:48:02.282929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.282937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 
lba:24472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.316 [2024-07-15 18:48:02.282944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.282953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.316 [2024-07-15 18:48:02.282959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.282969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.316 [2024-07-15 18:48:02.282976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.282984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.316 [2024-07-15 18:48:02.282991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.282999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.316 [2024-07-15 18:48:02.283005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.316 [2024-07-15 18:48:02.283020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 
[2024-07-15 18:48:02.283028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.283035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.283049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.283064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.283079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.283093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.283107] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.283123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.283138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.283154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.283168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.283183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24864 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.283197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.283212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.283230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.283245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.283259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.283273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 
18:48:02.283281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.283288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.283303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.283319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.283334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.283350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.283364] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.283378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.283393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.283408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.283422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.316 [2024-07-15 18:48:02.283436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.316 [2024-07-15 
18:48:02.283463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25000 len:8 PRP1 0x0 PRP2 0x0 00:22:52.316 [2024-07-15 18:48:02.283469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.316 [2024-07-15 18:48:02.283484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.316 [2024-07-15 18:48:02.283489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25008 len:8 PRP1 0x0 PRP2 0x0 00:22:52.316 [2024-07-15 18:48:02.283496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.316 [2024-07-15 18:48:02.283507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.316 [2024-07-15 18:48:02.283512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25016 len:8 PRP1 0x0 PRP2 0x0 00:22:52.316 [2024-07-15 18:48:02.283519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.316 [2024-07-15 18:48:02.283525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.316 [2024-07-15 18:48:02.283531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.317 [2024-07-15 18:48:02.283536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:8 PRP1 0x0 PRP2 0x0 00:22:52.317 [2024-07-15 18:48:02.283544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.317 [2024-07-15 18:48:02.283552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.317 [2024-07-15 18:48:02.283557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.317 [2024-07-15 18:48:02.283563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25032 len:8 PRP1 0x0 PRP2 0x0 00:22:52.317 [2024-07-15 18:48:02.283569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.317 [2024-07-15 18:48:02.283575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.317 [2024-07-15 18:48:02.283581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.317 [2024-07-15 18:48:02.283586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25040 len:8 PRP1 0x0 PRP2 0x0 00:22:52.317 [2024-07-15 18:48:02.283593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.317 [2024-07-15 18:48:02.283634] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1be6170 was disconnected and freed. reset controller. 
00:22:52.317 [2024-07-15 18:48:02.283643] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:52.317 [2024-07-15 18:48:02.283664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.317 [2024-07-15 18:48:02.283671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.317 [2024-07-15 18:48:02.283679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.317 [2024-07-15 18:48:02.283685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.317 [2024-07-15 18:48:02.283693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.317 [2024-07-15 18:48:02.283699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.317 [2024-07-15 18:48:02.283706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.317 [2024-07-15 18:48:02.283712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.317 [2024-07-15 18:48:02.283718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:52.317 [2024-07-15 18:48:02.283741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1b540 (9): Bad file descriptor 00:22:52.317 [2024-07-15 18:48:02.299639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:52.317 [2024-07-15 18:48:02.415604] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:52.317 00:22:52.317 Latency(us) 00:22:52.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.317 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:52.317 Verification LBA range: start 0x0 length 0x4000 00:22:52.317 NVMe0n1 : 15.01 10939.29 42.73 522.18 0.00 11145.54 420.29 23706.94 00:22:52.317 =================================================================================================================== 00:22:52.317 Total : 10939.29 42.73 522.18 0.00 11145.54 420.29 23706.94 00:22:52.317 Received shutdown signal, test time was about 15.000000 seconds 00:22:52.317 00:22:52.317 Latency(us) 00:22:52.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.317 =================================================================================================================== 00:22:52.317 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:52.317 18:48:08 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:52.317 18:48:08 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:22:52.317 18:48:08 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:52.317 18:48:08 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1188207 00:22:52.317 18:48:08 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1188207 /var/tmp/bdevperf.sock 00:22:52.317 18:48:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1188207 ']' 00:22:52.317 18:48:08 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.317 18:48:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:52.317 18:48:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:52.317 18:48:08 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:52.317 18:48:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:52.317 18:48:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:52.915 18:48:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:52.915 18:48:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:52.915 18:48:09 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:52.915 [2024-07-15 18:48:09.586890] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:52.915 18:48:09 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:53.174 [2024-07-15 18:48:09.779440] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:53.174 18:48:09 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:53.433 NVMe0n1 00:22:53.433 18:48:10 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:53.691 00:22:53.691 18:48:10 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:53.950 00:22:53.950 18:48:10 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:53.950 18:48:10 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:54.209 18:48:10 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:54.469 18:48:11 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:57.757 18:48:14 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:57.757 18:48:14 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:57.757 18:48:14 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:57.757 18:48:14 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1189397 00:22:57.758 18:48:14 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1189397 00:22:58.695 0 00:22:58.695 18:48:15 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # 
cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:58.695 [2024-07-15 18:48:08.604819] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:22:58.695 [2024-07-15 18:48:08.604872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1188207 ] 00:22:58.695 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.695 [2024-07-15 18:48:08.659944] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.695 [2024-07-15 18:48:08.729342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.695 [2024-07-15 18:48:10.987968] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:58.695 [2024-07-15 18:48:10.988018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.695 [2024-07-15 18:48:10.988030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.695 [2024-07-15 18:48:10.988039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.695 [2024-07-15 18:48:10.988047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.695 [2024-07-15 18:48:10.988055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.695 [2024-07-15 18:48:10.988062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.695 [2024-07-15 18:48:10.988069] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.695 [2024-07-15 18:48:10.988076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.695 [2024-07-15 18:48:10.988083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:58.695 [2024-07-15 18:48:10.988112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:58.695 [2024-07-15 18:48:10.988126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d5540 (9): Bad file descriptor 00:22:58.695 [2024-07-15 18:48:11.036338] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:58.695 Running I/O for 1 seconds... 00:22:58.695 00:22:58.695 Latency(us) 00:22:58.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.695 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:58.695 Verification LBA range: start 0x0 length 0x4000 00:22:58.695 NVMe0n1 : 1.01 10997.41 42.96 0.00 0.00 11594.10 2507.46 11055.64 00:22:58.695 =================================================================================================================== 00:22:58.695 Total : 10997.41 42.96 0.00 0.00 11594.10 2507.46 11055.64 00:22:58.695 18:48:15 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:58.695 18:48:15 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:58.954 18:48:15 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 00:22:59.213 18:48:15 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:59.213 18:48:15 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:59.213 18:48:15 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:59.473 18:48:16 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:02.759 18:48:19 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:02.759 18:48:19 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:02.759 18:48:19 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1188207 00:23:02.759 18:48:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1188207 ']' 00:23:02.759 18:48:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1188207 00:23:02.759 18:48:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:02.759 18:48:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:02.759 18:48:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1188207 00:23:02.759 18:48:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:02.759 18:48:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:02.759 18:48:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1188207' 00:23:02.759 killing process with pid 1188207 00:23:02.759 18:48:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # 
kill 1188207 00:23:02.759 18:48:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1188207 00:23:02.759 18:48:19 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:02.760 18:48:19 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:03.018 18:48:19 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:03.018 18:48:19 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:03.018 18:48:19 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:03.018 18:48:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:03.018 18:48:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:23:03.018 18:48:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:03.018 18:48:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:23:03.018 18:48:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:03.018 18:48:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:03.018 rmmod nvme_tcp 00:23:03.018 rmmod nvme_fabrics 00:23:03.018 rmmod nvme_keyring 00:23:03.018 18:48:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:03.275 18:48:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:23:03.275 18:48:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:23:03.275 18:48:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1184730 ']' 00:23:03.275 18:48:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1184730 00:23:03.275 18:48:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1184730 ']' 00:23:03.275 18:48:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1184730 00:23:03.275 18:48:19 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:03.275 18:48:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:03.275 18:48:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1184730 00:23:03.275 18:48:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:03.275 18:48:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:03.275 18:48:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1184730' 00:23:03.275 killing process with pid 1184730 00:23:03.275 18:48:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1184730 00:23:03.275 18:48:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1184730 00:23:03.275 18:48:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:03.275 18:48:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:03.275 18:48:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:03.275 18:48:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:03.275 18:48:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:03.275 18:48:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.275 18:48:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:03.275 18:48:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.805 18:48:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:05.805 00:23:05.805 real 0m38.290s 00:23:05.805 user 2m3.237s 00:23:05.805 sys 0m7.520s 00:23:05.805 18:48:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:05.805 18:48:22 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:05.805 ************************************ 00:23:05.805 END TEST nvmf_failover 00:23:05.805 ************************************ 00:23:05.805 18:48:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:05.805 18:48:22 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:05.805 18:48:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:05.805 18:48:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:05.805 18:48:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:05.805 ************************************ 00:23:05.805 START TEST nvmf_host_discovery 00:23:05.805 ************************************ 00:23:05.805 18:48:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:05.805 * Looking for test storage... 
00:23:05.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:05.805 18:48:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:05.805 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:05.805 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.805 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.805 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.805 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.805 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.805 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.805 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.805 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.805 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.805 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.805 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:05.805 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:05.805 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.805 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.805 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:05.805 18:48:22 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.805 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@285 -- # xtrace_disable 00:23:05.806 18:48:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:11.078 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:11.078 18:48:27 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:11.078 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:11.078 
18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:11.078 Found net devices under 0000:86:00.0: cvl_0_0 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:11.078 Found net devices under 0000:86:00.1: cvl_0_1 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:11.078 18:48:27 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:11.078 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:11.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:11.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:23:11.338 00:23:11.338 --- 10.0.0.2 ping statistics --- 00:23:11.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.338 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:23:11.338 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:11.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:11.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:23:11.338 00:23:11.338 --- 10.0.0.1 ping statistics --- 00:23:11.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.338 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:23:11.338 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:11.338 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:23:11.338 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:11.338 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:11.338 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:11.338 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:11.338 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:11.338 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:11.338 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:11.338 18:48:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # 
nvmfappstart -m 0x2 00:23:11.338 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:11.338 18:48:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:11.338 18:48:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:11.338 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1193714 00:23:11.338 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1193714 00:23:11.338 18:48:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:11.338 18:48:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1193714 ']' 00:23:11.338 18:48:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.338 18:48:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:11.338 18:48:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.338 18:48:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:11.338 18:48:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:11.338 [2024-07-15 18:48:27.881118] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:23:11.338 [2024-07-15 18:48:27.881158] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.338 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.338 [2024-07-15 18:48:27.935624] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.338 [2024-07-15 18:48:28.014575] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.338 [2024-07-15 18:48:28.014616] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.338 [2024-07-15 18:48:28.014623] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.338 [2024-07-15 18:48:28.014629] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.338 [2024-07-15 18:48:28.014635] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:11.338 [2024-07-15 18:48:28.014652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.275 18:48:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:12.275 18:48:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:12.275 18:48:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:12.275 18:48:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:12.275 18:48:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.275 18:48:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.275 18:48:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:12.275 18:48:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.275 18:48:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.275 [2024-07-15 18:48:28.717956] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.275 18:48:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.275 18:48:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:12.275 18:48:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.275 18:48:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.275 [2024-07-15 18:48:28.726094] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:12.275 18:48:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.275 18:48:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- 
# rpc_cmd bdev_null_create null0 1000 512 00:23:12.275 18:48:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.275 18:48:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.275 null0 00:23:12.275 18:48:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.275 18:48:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:12.275 18:48:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.276 18:48:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.276 null1 00:23:12.276 18:48:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.276 18:48:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:12.276 18:48:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.276 18:48:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.276 18:48:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.276 18:48:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1193857 00:23:12.276 18:48:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1193857 /tmp/host.sock 00:23:12.276 18:48:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1193857 ']' 00:23:12.276 18:48:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:12.276 18:48:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:12.276 18:48:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
00:23:12.276 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:12.276 18:48:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:12.276 18:48:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:12.276 18:48:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.276 [2024-07-15 18:48:28.801284] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:23:12.276 [2024-07-15 18:48:28.801325] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1193857 ] 00:23:12.276 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.276 [2024-07-15 18:48:28.854106] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.276 [2024-07-15 18:48:28.929178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.209 18:48:29 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.209 18:48:29 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:13.209 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r 
'.[].name' 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # 
get_bdev_list 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:13.210 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.469 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:13.469 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:13.469 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.469 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.469 [2024-07-15 18:48:29.925274] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:13.469 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.469 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:13.469 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:13.469 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.469 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:13.469 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.469 18:48:29 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # sort 00:23:13.469 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:13.469 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.469 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:13.469 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:13.469 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:13.469 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.469 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.469 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:13.469 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.469 18:48:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:13.469 18:48:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.469 18:48:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:13.469 18:48:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:13.469 18:48:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:13.469 18:48:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:13.469 18:48:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:13.469 18:48:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:13.469 18:48:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:13.469 18:48:30 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:13.469 18:48:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:13.469 18:48:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:13.469 18:48:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:13.469 18:48:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.469 18:48:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.469 18:48:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.469 18:48:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:13.469 18:48:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:13.469 18:48:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:13.469 18:48:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:13.469 18:48:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:13.469 18:48:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.469 18:48:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.470 18:48:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.470 18:48:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:13.470 18:48:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:13.470 18:48:30 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@913 -- # local max=10 00:23:13.470 18:48:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:13.470 18:48:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:13.470 18:48:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:13.470 18:48:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:13.470 18:48:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:13.470 18:48:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.470 18:48:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:13.470 18:48:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.470 18:48:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:13.470 18:48:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.470 18:48:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:23:13.470 18:48:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:14.037 [2024-07-15 18:48:30.656487] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:14.037 [2024-07-15 18:48:30.656508] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:14.037 [2024-07-15 18:48:30.656521] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:14.037 [2024-07-15 18:48:30.742786] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:14.298 [2024-07-15 18:48:30.840929] 
bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:14.298 [2024-07-15 18:48:30.840948] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@914 -- # (( max-- )) 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:14.611 
18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:14.611 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.869 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:14.869 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:14.870 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:14.870 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:14.870 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:14.870 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.870 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.870 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.870 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:14.870 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:14.870 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:14.870 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:14.870 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 
00:23:14.870 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:14.870 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:14.870 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.870 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.870 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:14.870 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:14.870 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:14.870 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # get_notification_count 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.129 [2024-07-15 18:48:31.633963] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:15.129 [2024-07-15 18:48:31.634328] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:15.129 [2024-07-15 18:48:31.634355] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:15.129 18:48:31 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:15.129 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:15.130 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:15.130 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:15.130 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:15.130 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.130 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.130 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:15.130 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:15.130 [2024-07-15 18:48:31.720916] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:15.130 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.130 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:15.130 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:15.130 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:15.130 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:15.130 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:15.130 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 
00:23:15.130 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:15.130 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:15.130 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:15.130 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:15.130 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.130 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.130 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:15.130 18:48:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:15.130 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.130 [2024-07-15 18:48:31.781411] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:15.130 [2024-07-15 18:48:31.781428] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:15.130 [2024-07-15 18:48:31.781433] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:15.130 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:15.130 18:48:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' 
'$NVMF_SECOND_PORT"' ']]' 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.509 [2024-07-15 18:48:32.885793] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:16.509 [2024-07-15 18:48:32.885815] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:16.509 18:48:32 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:16.509 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:16.510 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:16.510 [2024-07-15 18:48:32.894395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.510 [2024-07-15 18:48:32.894413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.510 [2024-07-15 18:48:32.894422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.510 [2024-07-15 18:48:32.894429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.510 [2024-07-15 18:48:32.894436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.510 [2024-07-15 18:48:32.894443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.510 [2024-07-15 18:48:32.894450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.510 [2024-07-15 18:48:32.894456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.510 [2024-07-15 
18:48:32.894463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18baf10 is same with the state(5) to be set 00:23:16.510 18:48:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:16.510 18:48:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:16.510 18:48:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:16.510 18:48:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:16.510 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.510 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.510 [2024-07-15 18:48:32.904408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18baf10 (9): Bad file descriptor 00:23:16.510 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.510 [2024-07-15 18:48:32.914446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:16.510 [2024-07-15 18:48:32.914693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.510 [2024-07-15 18:48:32.914711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18baf10 with addr=10.0.0.2, port=4420 00:23:16.510 [2024-07-15 18:48:32.914718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18baf10 is same with the state(5) to be set 00:23:16.510 [2024-07-15 18:48:32.914730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18baf10 (9): Bad file descriptor 00:23:16.510 [2024-07-15 18:48:32.914747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:16.510 [2024-07-15 18:48:32.914754] 
nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:16.510 [2024-07-15 18:48:32.914762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:16.510 [2024-07-15 18:48:32.914772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:16.510 [2024-07-15 18:48:32.924499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:16.510 [2024-07-15 18:48:32.924795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.510 [2024-07-15 18:48:32.924808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18baf10 with addr=10.0.0.2, port=4420 00:23:16.510 [2024-07-15 18:48:32.924815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18baf10 is same with the state(5) to be set 00:23:16.510 [2024-07-15 18:48:32.924826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18baf10 (9): Bad file descriptor 00:23:16.510 [2024-07-15 18:48:32.924835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:16.510 [2024-07-15 18:48:32.924841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:16.510 [2024-07-15 18:48:32.924848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:16.510 [2024-07-15 18:48:32.924858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:16.510 [2024-07-15 18:48:32.934549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:16.510 [2024-07-15 18:48:32.934826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.510 [2024-07-15 18:48:32.934840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18baf10 with addr=10.0.0.2, port=4420 00:23:16.510 [2024-07-15 18:48:32.934847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18baf10 is same with the state(5) to be set 00:23:16.510 [2024-07-15 18:48:32.934859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18baf10 (9): Bad file descriptor 00:23:16.510 [2024-07-15 18:48:32.934875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:16.510 [2024-07-15 18:48:32.934882] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:16.510 [2024-07-15 18:48:32.934889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:16.510 [2024-07-15 18:48:32.934898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:16.510 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.510 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:16.510 18:48:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:16.510 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:16.510 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:16.510 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:16.510 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:16.510 [2024-07-15 18:48:32.944601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:16.510 [2024-07-15 18:48:32.944806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.510 [2024-07-15 18:48:32.944820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18baf10 with addr=10.0.0.2, port=4420 00:23:16.510 [2024-07-15 18:48:32.944827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18baf10 is same with the state(5) to be set 00:23:16.510 [2024-07-15 18:48:32.944837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18baf10 (9): Bad file descriptor 00:23:16.510 [2024-07-15 18:48:32.944848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:16.510 [2024-07-15 18:48:32.944854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:16.510 [2024-07-15 18:48:32.944861] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:16.510 [2024-07-15 18:48:32.944871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:16.510 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:16.510 18:48:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.510 18:48:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:16.510 18:48:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:16.510 18:48:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:16.510 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.510 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.510 [2024-07-15 18:48:32.954654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:16.510 [2024-07-15 18:48:32.954913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.510 [2024-07-15 18:48:32.954927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18baf10 with addr=10.0.0.2, port=4420 00:23:16.510 [2024-07-15 18:48:32.954934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18baf10 is same with the state(5) to be set 00:23:16.510 [2024-07-15 18:48:32.954945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18baf10 (9): Bad file descriptor 00:23:16.510 [2024-07-15 18:48:32.954972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:16.510 [2024-07-15 18:48:32.954979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:16.511 
[2024-07-15 18:48:32.954986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:16.511 [2024-07-15 18:48:32.955012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:16.511 [2024-07-15 18:48:32.964708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:16.511 [2024-07-15 18:48:32.964974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.511 [2024-07-15 18:48:32.964987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18baf10 with addr=10.0.0.2, port=4420 00:23:16.511 [2024-07-15 18:48:32.964995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18baf10 is same with the state(5) to be set 00:23:16.511 [2024-07-15 18:48:32.965005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18baf10 (9): Bad file descriptor 00:23:16.511 [2024-07-15 18:48:32.965015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:16.511 [2024-07-15 18:48:32.965021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:16.511 [2024-07-15 18:48:32.965032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:16.511 [2024-07-15 18:48:32.965041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:16.511 [2024-07-15 18:48:32.971602] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:16.511 [2024-07-15 18:48:32.971618] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:16.511 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.511 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:16.511 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:16.511 18:48:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:16.511 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:16.511 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:16.511 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:16.511 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:16.511 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:16.511 18:48:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:16.511 18:48:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:16.511 18:48:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.511 18:48:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:16.511 18:48:32 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.511 18:48:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:16.511 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:16.769 18:48:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:16.769 18:48:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:16.769 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.769 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.769 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.769 18:48:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:16.769 18:48:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:16.769 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:16.769 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:16.769 18:48:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:16.769 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.769 18:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.705 [2024-07-15 18:48:34.315338] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:17.705 [2024-07-15 18:48:34.315354] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:17.705 [2024-07-15 18:48:34.315367] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:17.705 [2024-07-15 18:48:34.403635] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:18.275 [2024-07-15 18:48:34.715811] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:18.275 [2024-07-15 18:48:34.715838] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:18.275 request: 00:23:18.275 { 00:23:18.275 "name": "nvme", 00:23:18.275 "trtype": "tcp", 00:23:18.275 "traddr": "10.0.0.2", 00:23:18.275 "adrfam": "ipv4", 00:23:18.275 "trsvcid": "8009", 00:23:18.275 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:18.275 "wait_for_attach": true, 00:23:18.275 "method": "bdev_nvme_start_discovery", 00:23:18.275 "req_id": 1 00:23:18.275 } 00:23:18.275 Got JSON-RPC error 
response 00:23:18.275 response: 00:23:18.275 { 00:23:18.275 "code": -17, 00:23:18.275 "message": "File exists" 00:23:18.275 } 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.275 18:48:34 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:18.275 request: 00:23:18.275 { 00:23:18.275 "name": "nvme_second", 00:23:18.275 
"trtype": "tcp", 00:23:18.275 "traddr": "10.0.0.2", 00:23:18.275 "adrfam": "ipv4", 00:23:18.275 "trsvcid": "8009", 00:23:18.275 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:18.275 "wait_for_attach": true, 00:23:18.275 "method": "bdev_nvme_start_discovery", 00:23:18.275 "req_id": 1 00:23:18.275 } 00:23:18.275 Got JSON-RPC error response 00:23:18.275 response: 00:23:18.275 { 00:23:18.275 "code": -17, 00:23:18.275 "message": "File exists" 00:23:18.275 } 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:18.275 18:48:34 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:18.275 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:18.276 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:18.276 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:18.276 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 
-f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:18.276 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.276 18:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.650 [2024-07-15 18:48:35.959344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.650 [2024-07-15 18:48:35.959373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7a00 with addr=10.0.0.2, port=8010 00:23:19.650 [2024-07-15 18:48:35.959387] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:19.650 [2024-07-15 18:48:35.959393] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:19.650 [2024-07-15 18:48:35.959399] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:20.583 [2024-07-15 18:48:36.961766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.583 [2024-07-15 18:48:36.961790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f7a00 with addr=10.0.0.2, port=8010 00:23:20.583 [2024-07-15 18:48:36.961801] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:20.583 [2024-07-15 18:48:36.961807] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:20.583 [2024-07-15 18:48:36.961814] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:21.519 [2024-07-15 18:48:37.963924] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:21.519 request: 00:23:21.519 { 00:23:21.519 "name": "nvme_second", 00:23:21.519 "trtype": "tcp", 00:23:21.519 "traddr": "10.0.0.2", 00:23:21.519 "adrfam": "ipv4", 00:23:21.519 "trsvcid": "8010", 00:23:21.519 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:21.519 "wait_for_attach": false, 
00:23:21.519 "attach_timeout_ms": 3000, 00:23:21.519 "method": "bdev_nvme_start_discovery", 00:23:21.519 "req_id": 1 00:23:21.519 } 00:23:21.519 Got JSON-RPC error response 00:23:21.519 response: 00:23:21.519 { 00:23:21.519 "code": -110, 00:23:21.519 "message": "Connection timed out" 00:23:21.519 } 00:23:21.519 18:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:21.519 18:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:21.519 18:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:21.519 18:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:21.519 18:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:21.519 18:48:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:21.519 18:48:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:21.519 18:48:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:21.519 18:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.519 18:48:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:21.519 18:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.519 18:48:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:21.519 18:48:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.519 18:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:21.519 18:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:21.519 18:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1193857 00:23:21.519 18:48:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # 
nvmftestfini 00:23:21.519 18:48:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:21.519 18:48:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:23:21.519 18:48:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:21.519 18:48:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:23:21.519 18:48:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:21.519 18:48:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:21.519 rmmod nvme_tcp 00:23:21.519 rmmod nvme_fabrics 00:23:21.519 rmmod nvme_keyring 00:23:21.519 18:48:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:21.519 18:48:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:23:21.519 18:48:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:23:21.519 18:48:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1193714 ']' 00:23:21.519 18:48:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1193714 00:23:21.519 18:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1193714 ']' 00:23:21.519 18:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1193714 00:23:21.519 18:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:23:21.519 18:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:21.519 18:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1193714 00:23:21.519 18:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:21.519 18:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:21.519 18:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing 
process with pid 1193714' 00:23:21.519 killing process with pid 1193714 00:23:21.519 18:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1193714 00:23:21.519 18:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1193714 00:23:21.779 18:48:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:21.779 18:48:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:21.779 18:48:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:21.779 18:48:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:21.779 18:48:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:21.779 18:48:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.779 18:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:21.779 18:48:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.684 18:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:23.684 00:23:23.684 real 0m18.266s 00:23:23.684 user 0m22.998s 00:23:23.684 sys 0m5.560s 00:23:23.684 18:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:23.684 18:48:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.684 ************************************ 00:23:23.684 END TEST nvmf_host_discovery 00:23:23.684 ************************************ 00:23:23.943 18:48:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:23.943 18:48:40 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:23.943 18:48:40 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:23.943 18:48:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:23.943 18:48:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:23.943 ************************************ 00:23:23.943 START TEST nvmf_host_multipath_status 00:23:23.943 ************************************ 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:23.943 * Looking for test storage... 00:23:23.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.943 18:48:40 
nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@47 -- # : 0 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:23.943 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:23.944 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:23.944 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:23.944 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:23.944 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:23:23.944 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:23.944 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:23.944 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:23.944 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.944 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:23.944 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.944 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:23.944 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:23.944 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:23:23.944 18:48:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:23:29.224 18:48:45 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:29.224 18:48:45 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:29.224 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:29.224 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:29.224 18:48:45 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:29.224 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:29.225 Found net devices under 0000:86:00.0: cvl_0_0 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status 
-- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:29.225 Found net devices under 0000:86:00.1: cvl_0_1 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:29.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:29.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:23:29.225 00:23:29.225 --- 10.0.0.2 ping statistics --- 00:23:29.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.225 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:29.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:29.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:23:29.225 00:23:29.225 --- 10.0.0.1 ping statistics --- 00:23:29.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.225 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1198923 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1198923 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1198923 ']' 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:29.225 18:48:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:29.225 [2024-07-15 18:48:45.441895] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:23:29.225 [2024-07-15 18:48:45.441938] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.225 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.225 [2024-07-15 18:48:45.497394] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:29.225 [2024-07-15 18:48:45.576058] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:29.225 [2024-07-15 18:48:45.576092] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:29.225 [2024-07-15 18:48:45.576099] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.225 [2024-07-15 18:48:45.576105] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.225 [2024-07-15 18:48:45.576110] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:29.225 [2024-07-15 18:48:45.576150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.225 [2024-07-15 18:48:45.576152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.791 18:48:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:29.791 18:48:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:23:29.791 18:48:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:29.791 18:48:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:29.791 18:48:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:29.791 18:48:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:29.791 18:48:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1198923 00:23:29.792 18:48:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:29.792 [2024-07-15 18:48:46.427480] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.792 18:48:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:30.050 Malloc0 00:23:30.050 18:48:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:30.307 18:48:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:30.307 18:48:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:30.565 [2024-07-15 18:48:47.103137] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.565 18:48:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:30.565 [2024-07-15 18:48:47.263543] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:30.823 18:48:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:30.823 18:48:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1199184 00:23:30.823 18:48:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:30.823 18:48:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1199184 /var/tmp/bdevperf.sock 00:23:30.823 18:48:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1199184 ']' 00:23:30.823 18:48:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:30.823 18:48:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:30.823 18:48:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:30.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:30.823 18:48:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:30.823 18:48:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:30.823 18:48:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:30.823 18:48:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:23:30.823 18:48:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:31.082 18:48:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:31.649 Nvme0n1 00:23:31.649 18:48:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:31.908 Nvme0n1 00:23:31.908 18:48:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:31.908 18:48:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:33.848 18:48:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:33.848 18:48:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:34.106 18:48:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:34.365 18:48:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:35.298 18:48:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:35.298 18:48:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:35.298 18:48:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.298 18:48:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:35.556 18:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.556 18:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:35.556 18:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.556 18:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:35.556 18:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:35.556 18:48:52 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:35.556 18:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.556 18:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:35.814 18:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.814 18:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:35.814 18:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:35.814 18:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.072 18:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.072 18:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:36.072 18:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.072 18:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:36.072 18:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.072 18:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:36.072 18:48:52 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.072 18:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:36.330 18:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.330 18:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:36.330 18:48:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:36.588 18:48:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:36.846 18:48:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:37.804 18:48:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:37.804 18:48:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:37.804 18:48:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.804 18:48:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:37.804 18:48:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:23:37.804 18:48:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:37.804 18:48:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.804 18:48:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:38.061 18:48:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:38.061 18:48:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:38.061 18:48:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.061 18:48:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:38.318 18:48:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:38.318 18:48:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:38.318 18:48:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.318 18:48:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:38.576 18:48:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:38.576 18:48:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 
4420 accessible true 00:23:38.576 18:48:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:38.576 18:48:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.576 18:48:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:38.576 18:48:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:38.576 18:48:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:38.576 18:48:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.833 18:48:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:38.833 18:48:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:38.833 18:48:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:39.090 18:48:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:39.347 18:48:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:40.287 18:48:56 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:40.287 18:48:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:40.287 18:48:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.287 18:48:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:40.545 18:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.545 18:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:40.545 18:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.545 18:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:40.545 18:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:40.545 18:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:40.545 18:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:40.545 18:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.803 18:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.803 18:48:57 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:40.803 18:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.803 18:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:41.061 18:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.061 18:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:41.061 18:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.061 18:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:41.360 18:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.360 18:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:41.360 18:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.360 18:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:41.360 18:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.360 18:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 
00:23:41.360 18:48:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:41.619 18:48:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:41.878 18:48:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:42.814 18:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:42.814 18:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:42.814 18:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.814 18:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:43.073 18:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.073 18:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:43.073 18:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.073 18:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:43.073 18:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# [[ false == \f\a\l\s\e ]] 00:23:43.073 18:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:43.073 18:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:43.073 18:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.332 18:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.332 18:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:43.332 18:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:43.332 18:48:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.605 18:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.605 18:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:43.605 18:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.605 18:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:43.605 18:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.605 18:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 
-- # port_status 4421 accessible false 00:23:43.605 18:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.606 18:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:43.864 18:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:43.864 18:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:43.864 18:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:44.121 18:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:44.379 18:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:45.310 18:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:45.310 18:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:45.310 18:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.310 18:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:45.568 18:49:02 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:45.568 18:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:45.568 18:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.568 18:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:45.568 18:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:45.568 18:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:45.568 18:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.568 18:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:45.826 18:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.826 18:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:45.826 18:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.826 18:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:46.083 18:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:46.083 
18:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:46.083 18:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.083 18:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:46.083 18:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:46.083 18:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:46.083 18:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.083 18:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:46.339 18:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:46.339 18:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:46.339 18:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:46.597 18:49:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:46.597 18:49:03 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@113 -- # sleep 1 00:23:47.971 18:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:47.971 18:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:47.971 18:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.971 18:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:47.971 18:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:47.971 18:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:47.971 18:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.971 18:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:47.971 18:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.971 18:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:47.971 18:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.971 18:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:48.229 18:49:04 nvmf_tcp.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.229 18:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:48.229 18:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.229 18:49:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:48.542 18:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.542 18:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:48.542 18:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.542 18:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:48.542 18:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:48.542 18:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:48.542 18:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.542 18:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:48.801 18:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.801 18:49:05 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:49.060 18:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:49.060 18:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:49.318 18:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:49.318 18:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:50.694 18:49:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:50.694 18:49:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:50.694 18:49:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.694 18:49:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:50.694 18:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.694 18:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:50.694 18:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.694 18:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:50.694 18:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.694 18:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:50.694 18:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.694 18:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:50.953 18:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.953 18:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:50.953 18:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.953 18:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:51.210 18:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.210 18:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:51.210 18:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:23:51.210 18:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:51.467 18:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.467 18:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:51.467 18:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.467 18:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:51.467 18:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.467 18:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:51.467 18:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:51.724 18:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:51.981 18:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:52.914 18:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:52.914 18:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:52.914 18:49:09 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.914 18:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:53.172 18:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:53.172 18:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:53.172 18:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.172 18:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:53.430 18:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.430 18:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:53.430 18:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.430 18:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:53.430 18:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.430 18:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:53.430 18:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.430 18:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:53.688 18:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.688 18:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:53.688 18:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.688 18:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:53.948 18:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.948 18:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:53.948 18:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.948 18:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:53.948 18:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.948 18:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:53.948 18:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:54.206 18:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:54.464 18:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:55.400 18:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:55.400 18:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:55.400 18:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.400 18:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:55.659 18:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.659 18:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:55.659 18:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.659 18:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:55.918 18:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.918 18:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:55.918 18:49:12 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:55.918 18:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.918 18:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.918 18:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:55.918 18:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:55.918 18:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.177 18:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.177 18:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:56.177 18:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:56.177 18:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.435 18:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.435 18:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:56.435 18:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:23:56.435 18:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.694 18:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.694 18:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:56.694 18:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:56.694 18:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:56.952 18:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:57.886 18:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:57.886 18:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:57.886 18:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.886 18:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:58.146 18:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.146 18:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # 
port_status 4421 current false 00:23:58.146 18:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.146 18:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:58.406 18:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:58.406 18:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:58.406 18:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.406 18:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:58.665 18:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.665 18:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:58.665 18:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.665 18:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:58.665 18:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.665 18:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:58.665 18:49:15 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.665 18:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:58.924 18:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.924 18:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:58.924 18:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.924 18:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:59.182 18:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:59.182 18:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1199184 00:23:59.182 18:49:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1199184 ']' 00:23:59.182 18:49:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1199184 00:23:59.182 18:49:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:23:59.182 18:49:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:59.182 18:49:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1199184 00:23:59.182 18:49:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:59.182 18:49:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = 
sudo ']' 00:23:59.182 18:49:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1199184' 00:23:59.182 killing process with pid 1199184 00:23:59.182 18:49:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1199184 00:23:59.182 18:49:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1199184 00:23:59.182 Connection closed with partial response: 00:23:59.182 00:23:59.182 00:23:59.464 18:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1199184 00:23:59.464 18:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:59.464 [2024-07-15 18:48:47.308328] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:23:59.464 [2024-07-15 18:48:47.308379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1199184 ] 00:23:59.464 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.464 [2024-07-15 18:48:47.358811] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.464 [2024-07-15 18:48:47.433332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:59.464 Running I/O for 90 seconds... 
00:23:59.464 [2024-07-15 18:49:00.652653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.464 [2024-07-15 18:49:00.652692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:59.464 [2024-07-15 18:49:00.652714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.464 [2024-07-15 18:49:00.652722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:59.464 [2024-07-15 18:49:00.652735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.464 [2024-07-15 18:49:00.652742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:59.464 [2024-07-15 18:49:00.652754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.464 [2024-07-15 18:49:00.652761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.464 [2024-07-15 18:49:00.652774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.464 [2024-07-15 18:49:00.652781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.464 [2024-07-15 18:49:00.652793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.464 
[2024-07-15 18:49:00.652800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:59.464 [2024-07-15 18:49:00.652812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.464 [2024-07-15 18:49:00.652819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:59.464 [2024-07-15 18:49:00.652832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.464 [2024-07-15 18:49:00.652839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:59.464 [2024-07-15 18:49:00.652852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.464 [2024-07-15 18:49:00.652859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:59.464 [2024-07-15 18:49:00.652871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.464 [2024-07-15 18:49:00.652879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:59.464 [2024-07-15 18:49:00.652892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.464 [2024-07-15 18:49:00.652905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:59.464 [2024-07-15 
18:49:00.652917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.464 [2024-07-15 18:49:00.652925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:59.464 [2024-07-15 18:49:00.652939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.464 [2024-07-15 18:49:00.652946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:59.464 [2024-07-15 18:49:00.652959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.464 [2024-07-15 18:49:00.652966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:59.464 [2024-07-15 18:49:00.652979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.464 [2024-07-15 18:49:00.652986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:59.464 [2024-07-15 18:49:00.652997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.464 [2024-07-15 18:49:00.653005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:59.464 [2024-07-15 18:49:00.653019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.464 [2024-07-15 18:49:00.653028] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:59.464 [2024-07-15 18:49:00.653042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.464 [2024-07-15 18:49:00.653050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:59.464 [2024-07-15 18:49:00.653064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.464 [2024-07-15 18:49:00.653072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.653093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.653114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.465 [2024-07-15 18:49:00.653134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653146] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:39112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.465 [2024-07-15 18:49:00.653153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:39120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.465 [2024-07-15 18:49:00.653174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.465 [2024-07-15 18:49:00.653195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:39136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.465 [2024-07-15 18:49:00.653539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.653563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.653583] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.653602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.653623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.653642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.653662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.653681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653693] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.653700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.653719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.653742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.653761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.653781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.653800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.653819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.653839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.653858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.653877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.653898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653910] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.653916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.653937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.653957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.653978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.653991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.653999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.654011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.654019] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.654031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.654038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.654051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.654058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.654071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.654078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.654090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.654097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.654109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.654116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.654128] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.654135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.654147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.654154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.654166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.654174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.654186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.654192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.654205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.654214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:59.465 [2024-07-15 18:49:00.654231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.465 [2024-07-15 18:49:00.654239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.654252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.654258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.654271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.654278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.654291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.654298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.654310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.654317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.654664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.654676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.654690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.654699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.654712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.654719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.654731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.654738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.654750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.654759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.654771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.654778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.654791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.654797] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.654812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.654819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.654832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.654839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.654852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.654858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.654871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.654878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.654891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.654899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.654911] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.654918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.654931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.654939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.654952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.654958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.654970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.654978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.654992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.654999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.655011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.655018] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.655031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.655039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.655052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.655059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.655071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.655078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.655091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.655099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.655111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.655118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.655130] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.655137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.655150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.655157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.655169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.655175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.655187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.655195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.655207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.655214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.655232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.655241] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.655254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.655262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.655275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.655282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.655294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:39928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.655303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.655316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.655324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.655336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.655343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.655355] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.655362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.655375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.655382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.655394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.466 [2024-07-15 18:49:00.655401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:59.466 [2024-07-15 18:49:00.655413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.655420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.655433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.655439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.655452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.655459] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.655812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:40000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.655823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.655837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.467 [2024-07-15 18:49:00.655845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.655858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.467 [2024-07-15 18:49:00.655865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.655877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.467 [2024-07-15 18:49:00.655886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.655899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.467 [2024-07-15 18:49:00.655907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.655919] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.467 [2024-07-15 18:49:00.655926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.655938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.467 [2024-07-15 18:49:00.655947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.655959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.467 [2024-07-15 18:49:00.655966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.655979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.655986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.655998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:40016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656025] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:40032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:40048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:40064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:40096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:40112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656246] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:40120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:39224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656358] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656739] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656849] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.467 [2024-07-15 18:49:00.656855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:59.467 [2024-07-15 18:49:00.656868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.468 [2024-07-15 18:49:00.656884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.656897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.468 [2024-07-15 18:49:00.656905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.656919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.468 [2024-07-15 18:49:00.656926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.656938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.468 [2024-07-15 18:49:00.656945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.656957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.468 [2024-07-15 18:49:00.656965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.656977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:39104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.468 [2024-07-15 18:49:00.656984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.656997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:39112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.468 [2024-07-15 18:49:00.657003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.657016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:39120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.468 [2024-07-15 18:49:00.657023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.657035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:39128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.468 [2024-07-15 18:49:00.657042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.657055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:39136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.468 [2024-07-15 18:49:00.657061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.657075] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.468 [2024-07-15 18:49:00.657082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.657095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.468 [2024-07-15 18:49:00.657101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.657113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.468 [2024-07-15 18:49:00.657120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.657133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.468 [2024-07-15 18:49:00.657141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.657154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.468 [2024-07-15 18:49:00.657161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.657173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.468 [2024-07-15 18:49:00.657180] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.657192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.468 [2024-07-15 18:49:00.657200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.657214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.468 [2024-07-15 18:49:00.657220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.657239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.468 [2024-07-15 18:49:00.657246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.657259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.468 [2024-07-15 18:49:00.657266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.657279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.468 [2024-07-15 18:49:00.657286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.657540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.468 [2024-07-15 18:49:00.657551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.657565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.468 [2024-07-15 18:49:00.657573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.657586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.468 [2024-07-15 18:49:00.657593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.657605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.468 [2024-07-15 18:49:00.657613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.657626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.468 [2024-07-15 18:49:00.657633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.657647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.468 [2024-07-15 18:49:00.657654] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.657669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.468 [2024-07-15 18:49:00.657677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.657689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.468 [2024-07-15 18:49:00.657696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.657708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.468 [2024-07-15 18:49:00.657717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.657730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.468 [2024-07-15 18:49:00.657737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.657749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.468 [2024-07-15 18:49:00.657756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.657768] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.468 [2024-07-15 18:49:00.657775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:59.468 [2024-07-15 18:49:00.657789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.468 [2024-07-15 18:49:00.657797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.657809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.657815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.657827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.657835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.657847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.657854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.668306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.668316] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.668331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.668339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.668351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.668358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.668371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.668378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.668391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.668398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.668410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.668417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.668429] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.668436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.668449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.668456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.668468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.668475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.668488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.668495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.668507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.668514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.668526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.668533] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.668548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.668555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.668567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.668576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.668589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.668596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.668609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.668615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.668936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.668950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.668965] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.668972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.668985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.668992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.669005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.669012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.669026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.669033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.669046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.669052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.669065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.669072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.669087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.669094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.669106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.669114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.669127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.669136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.669148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.669155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.669169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.669176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:59.469 [2024-07-15 18:49:00.669189] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.469 [2024-07-15 18:49:00.669196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0
[... repeated NOTICE pairs omitted: WRITE and READ commands on sqid:1 (lba 39104-40120, len:8) each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), logged between 18:49:00.669196 and 18:49:00.672650 ...]
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.472 [2024-07-15 18:49:00.672664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.472 [2024-07-15 18:49:00.672671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:59.472 [2024-07-15 18:49:00.672684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.472 [2024-07-15 18:49:00.672692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:59.472 [2024-07-15 18:49:00.672704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.472 [2024-07-15 18:49:00.672712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:59.472 [2024-07-15 18:49:00.672724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.472 [2024-07-15 18:49:00.672731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:59.472 [2024-07-15 18:49:00.672746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.472 [2024-07-15 18:49:00.672752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:59.472 [2024-07-15 18:49:00.672765] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.472 [2024-07-15 18:49:00.672772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:59.472 [2024-07-15 18:49:00.672784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.472 [2024-07-15 18:49:00.672794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:59.472 [2024-07-15 18:49:00.672806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.472 [2024-07-15 18:49:00.672813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:59.472 [2024-07-15 18:49:00.672825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.472 [2024-07-15 18:49:00.672832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:59.472 [2024-07-15 18:49:00.672845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.472 [2024-07-15 18:49:00.672852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:59.472 [2024-07-15 18:49:00.672864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.472 [2024-07-15 18:49:00.672871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:59.472 [2024-07-15 18:49:00.672883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.472 [2024-07-15 18:49:00.672890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:59.472 [2024-07-15 18:49:00.672905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.472 [2024-07-15 18:49:00.672912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:59.472 [2024-07-15 18:49:00.672925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.472 [2024-07-15 18:49:00.672933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.679525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.679536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.679551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.679558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.679757] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.679771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.679786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.679794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.679807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.679814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.679827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.679835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.679847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.679854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.679867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.679874] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.679886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.679895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.679908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.679915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.679928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.679935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.679948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.679955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.679967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.679974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.679986] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.679994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.680007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.680016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.680028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.680035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.680047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:39952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.680055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.680067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.680075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.680088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.680096] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.680108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.680116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.680128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.680136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.680148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.680156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.680168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:40000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.680176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.680190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.473 [2024-07-15 18:49:00.680197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.680209] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.473 [2024-07-15 18:49:00.680217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.680235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.473 [2024-07-15 18:49:00.680242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.680256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.473 [2024-07-15 18:49:00.680264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.680278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.473 [2024-07-15 18:49:00.680287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.680299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.473 [2024-07-15 18:49:00.680306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.680319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.473 [2024-07-15 18:49:00.680327] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.680339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.680347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.680359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:40016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.680366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.680379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.680386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.680398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:40032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.680405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.680417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.680425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.680437] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:40048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.680446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.680458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.680465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.680478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:40064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.680485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.680497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.680504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.680519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.680526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.680539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.473 [2024-07-15 18:49:00.680546] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:59.473 [2024-07-15 18:49:00.680560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.680567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.680581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.680588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.680600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:40112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.680608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.680620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:40120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.680627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.680640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.680647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.680660] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.680667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.680680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.680687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.680700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:39224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.680707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.680720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.680727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.680739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.680747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.680759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.680768] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.680781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.680788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.680800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.680808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.680820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.680827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.680841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.680848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.680860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.680868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.680880] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.680887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.680900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.680907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.680920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.680926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.680938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.680946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.680959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.680966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.680981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.680989] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.681001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.681011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.681023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.681030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.681044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.681052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.681065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:39104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.474 [2024-07-15 18:49:00.681072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.681085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:39112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.474 [2024-07-15 18:49:00.681092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.681106] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:39120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.474 [2024-07-15 18:49:00.681113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.681125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.474 [2024-07-15 18:49:00.681133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.681146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:39136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.474 [2024-07-15 18:49:00.681152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.681165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.681173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.681186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.681193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.681206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.681213] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.681229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.681237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.681249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.681256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.681272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.681279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.681291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.681299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.681312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.681319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.681332] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.681340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.681352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.681359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.681372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.681380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.681392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.474 [2024-07-15 18:49:00.681399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:59.474 [2024-07-15 18:49:00.681412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.681418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.681431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.681438] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.681451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.681459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.681471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.681478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.681491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.681498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.681512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.681519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.681532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.681539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.681551] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.681559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.681571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.681579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.681591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.681599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.681611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.681619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.681632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.681639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.681652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.681659] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.681671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.681679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.681692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.681699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.681712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.681718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.681731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.681738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.681751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.681760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.681773] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.681780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.681793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.681800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.681813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.681820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.681832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.681840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.681854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.681861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.681873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.681881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.681893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.681900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.681913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.681920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.681933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.681940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.681953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.681961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.681974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.681981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.682906] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.682927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.682942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.682950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.682963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.682970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.682983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.682991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.683004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.683011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.683024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.683031] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.683045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.683052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.683065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.683074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.683087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.683095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.683109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.683117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.683129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.683137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.683149] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.683157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.683169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.475 [2024-07-15 18:49:00.683177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:59.475 [2024-07-15 18:49:00.683190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.476 [2024-07-15 18:49:00.683198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:59.476 [2024-07-15 18:49:00.683211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.476 [2024-07-15 18:49:00.683219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:59.476 [2024-07-15 18:49:00.683237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.476 [2024-07-15 18:49:00.683246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:59.476 [2024-07-15 18:49:00.683259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.476 [2024-07-15 18:49:00.683267] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:59.476 [2024-07-15 18:49:00.683432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.476 [2024-07-15 18:49:00.683442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:59.476 [2024-07-15 18:49:00.683457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:39840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.476 [2024-07-15 18:49:00.683465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:59.476 [2024-07-15 18:49:00.683477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.476 [2024-07-15 18:49:00.683485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:59.476 [2024-07-15 18:49:00.683498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.476 [2024-07-15 18:49:00.683505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:59.476 [2024-07-15 18:49:00.683518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.476 [2024-07-15 18:49:00.683526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:59.476 [2024-07-15 18:49:00.683538] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.476 [2024-07-15 18:49:00.683546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:59.476 [2024-07-15 18:49:00.683559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.476 [2024-07-15 18:49:00.683566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:59.476 [2024-07-15 18:49:00.683579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.476 [2024-07-15 18:49:00.683586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:59.476 [2024-07-15 18:49:00.683600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.476 [2024-07-15 18:49:00.683607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:59.476 [2024-07-15 18:49:00.683620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.476 [2024-07-15 18:49:00.683627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:59.476 [2024-07-15 18:49:00.683639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.476 [2024-07-15 18:49:00.683647] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:59.476 [2024-07-15 18:49:00.683659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.476 [2024-07-15 18:49:00.683667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:59.476 [2024-07-15 18:49:00.683679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.476 [2024-07-15 18:49:00.683687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:59.476 [2024-07-15 18:49:00.683704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.476 [2024-07-15 18:49:00.683711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:59.476 [2024-07-15 18:49:00.683723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.476 [2024-07-15 18:49:00.683730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:59.476 [2024-07-15 18:49:00.683743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.476 [2024-07-15 18:49:00.683750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:59.476 [2024-07-15 18:49:00.683762] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.476 [2024-07-15 18:49:00.683769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.476 [2024-07-15 18:49:00.683782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.476 [2024-07-15 18:49:00.683790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:59.476 [2024-07-15 18:49:00.683802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.476 [2024-07-15 18:49:00.683810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:59.476 [2024-07-15 18:49:00.683822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.476 [2024-07-15 18:49:00.683830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:59.476 [2024-07-15 18:49:00.683842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.476 [2024-07-15 18:49:00.683851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:59.476 [2024-07-15 18:49:00.683864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.476 [2024-07-15 18:49:00.683870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:59.476 [repetitive nvme_qpair.c NOTICE output elided: alternating 243:nvme_io_qpair_print_command entries (READ/WRITE sqid:1 nsid:1, lba 39104-40120, len:8, SGL DATA BLOCK / TRANSPORT DATA BLOCK) and 474:spdk_nvme_print_completion entries, every command completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0, timestamps 2024-07-15 18:49:00.683883 through 18:49:00.691808] 00:23:59.479 [2024-07-15 18:49:00.691821] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.479 [2024-07-15 18:49:00.691829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:59.479 [2024-07-15 18:49:00.691842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.479 [2024-07-15 18:49:00.691849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:59.479 [2024-07-15 18:49:00.692096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.479 [2024-07-15 18:49:00.692119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:59.479 [2024-07-15 18:49:00.692133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.479 [2024-07-15 18:49:00.692142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:59.479 [2024-07-15 18:49:00.692154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.479 [2024-07-15 18:49:00.692162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:59.479 [2024-07-15 18:49:00.692175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.479 [2024-07-15 18:49:00.692183] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:59.479 [2024-07-15 18:49:00.692198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.479 [2024-07-15 18:49:00.692206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:59.479 [2024-07-15 18:49:00.692218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.479 [2024-07-15 18:49:00.692231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.479 [2024-07-15 18:49:00.692244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.479 [2024-07-15 18:49:00.692252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:59.479 [2024-07-15 18:49:00.692264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.479 [2024-07-15 18:49:00.692271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:59.479 [2024-07-15 18:49:00.692284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.479 [2024-07-15 18:49:00.692292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:59.479 [2024-07-15 18:49:00.692304] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.479 [2024-07-15 18:49:00.692311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:59.479 [2024-07-15 18:49:00.692324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:40000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.479 [2024-07-15 18:49:00.692332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:59.479 [2024-07-15 18:49:00.692345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.479 [2024-07-15 18:49:00.692352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:59.479 [2024-07-15 18:49:00.692365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.479 [2024-07-15 18:49:00.692374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:59.479 [2024-07-15 18:49:00.692387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.479 [2024-07-15 18:49:00.692395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:59.479 [2024-07-15 18:49:00.692407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.479 [2024-07-15 18:49:00.692415] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:59.479 [2024-07-15 18:49:00.692428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.479 [2024-07-15 18:49:00.692435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:59.479 [2024-07-15 18:49:00.692449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.479 [2024-07-15 18:49:00.692457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:59.479 [2024-07-15 18:49:00.692470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.479 [2024-07-15 18:49:00.692477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.692492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.692499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.692513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:40016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.692520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.692533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.692540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.692552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:40032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.692560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.692572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.692579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.692591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:40048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.692599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.692612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.692620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.692633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:40064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.692640] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.692653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.692660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.692673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.692680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.692693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.692702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.692714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:40096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.692721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.692734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.692741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.692754] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:40112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.692761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.692773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:40120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.692781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.692794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.692801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.692815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.692822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.692835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.692843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.692855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.692862] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.692875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.692882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.692894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.692902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.692914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.692922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.692934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.692943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.692956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.692963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.692976] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.692983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.692996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.693004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.693017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.693024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.693036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.693044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.693056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.693064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.693076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.693084] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.693096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.693103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.693116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.693123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.693137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.693144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.693157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.693165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.693178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.693185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.693199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.480 [2024-07-15 18:49:00.693206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.693218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:39104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.480 [2024-07-15 18:49:00.693229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.693242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.480 [2024-07-15 18:49:00.693250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.693263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.480 [2024-07-15 18:49:00.693270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.693283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.480 [2024-07-15 18:49:00.693290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.693303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:39136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.480 [2024-07-15 18:49:00.693311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:59.480 [2024-07-15 18:49:00.693325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.481 [2024-07-15 18:49:00.693332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:59.481 [2024-07-15 18:49:00.693345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.481 [2024-07-15 18:49:00.693353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:59.481 [2024-07-15 18:49:00.693365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.481 [2024-07-15 18:49:00.693373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:59.481 [2024-07-15 18:49:00.693386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.481 [2024-07-15 18:49:00.693393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:59.481 [2024-07-15 18:49:00.693405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.481 [2024-07-15 18:49:00.693413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:59.481 [2024-07-15 18:49:00.693425] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.481 [2024-07-15 18:49:00.693437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:59.481 [2024-07-15 18:49:00.693451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.481 [2024-07-15 18:49:00.693459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:59.481 [2024-07-15 18:49:00.693473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.481 [2024-07-15 18:49:00.693480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:59.481 [2024-07-15 18:49:00.693493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.481 [2024-07-15 18:49:00.693501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:59.481 [2024-07-15 18:49:00.693514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.481 [2024-07-15 18:49:00.693522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:59.481 [2024-07-15 18:49:00.693535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.481 [2024-07-15 18:49:00.693542] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:59.481 [2024-07-15 18:49:00.693554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.481 [2024-07-15 18:49:00.693562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:23:59.481 [2024-07-15 18:49:00.693575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.481 [2024-07-15 18:49:00.693582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
[... repeated nvme_qpair.c notices trimmed: WRITE (lba:39200-40120) and READ (lba:39104-39192) commands on sqid:1, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1, sqhd:0024 through sqhd:0014, timestamps 18:49:00.693-18:49:00.697 ...]
00:23:59.484 [2024-07-15 18:49:00.697342] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:39128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.484 [2024-07-15 18:49:00.697350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.697362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:39136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.484 [2024-07-15 18:49:00.697369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.697382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.697391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.697404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.697411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.697424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.697431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.697443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.697451] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.697463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.697471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.697483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.697490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.697503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.697510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.697761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.697772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.697786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.697794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.697807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.697815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.697828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.697835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.697848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.697856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.697869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.697876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.697891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.697898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.697911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.697918] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.697931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.697939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.697952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.697959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.697972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.697979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.697992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.697999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.698012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.698021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.698033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.698041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.698053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.698061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.698074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.698082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.698094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.698102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.698114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.698121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.698136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.698144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.698156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.698163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.698175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.698183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.698195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.698203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.698215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.698223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.698240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.698247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.698260] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.698268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.698281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.698288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.698302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.698309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:59.484 [2024-07-15 18:49:00.698322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.484 [2024-07-15 18:49:00.698329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.698342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.698350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.698363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.698370] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.698383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.698392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.698406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.698413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.698699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.698710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.698724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.698734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.698747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.698754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.698767] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.698775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.698787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.698795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.698807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.698815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.698828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.698835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.698849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.698856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.698870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.698878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.698890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.698898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.698910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.698920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.698932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.698939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.698952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.698959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.698972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.698981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.698994] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.699001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.699014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.699021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.699035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.699043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.699056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.699063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.699076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.699083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.699096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.699103] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.699116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.699123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.699136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.699143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.699156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.699164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.699179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.699186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.699411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.699422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.699436] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.699444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.699456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.699464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.699477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.699485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.699497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.699505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.699518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.699525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.699538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.485 [2024-07-15 18:49:00.699545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:59.485 [2024-07-15 18:49:00.699557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.699565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.699578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.699585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.699598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.699606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.699618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.699625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.699642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.699649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.699662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.699670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.699683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.699690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.699703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.699710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.699723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.699731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.699889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:40000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.699899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.699913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.486 [2024-07-15 18:49:00.699920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.699933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.486 [2024-07-15 18:49:00.699941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.699954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.486 [2024-07-15 18:49:00.699962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.699976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.486 [2024-07-15 18:49:00.699983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.699997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:39176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.486 [2024-07-15 18:49:00.700005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.700018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.486 [2024-07-15 18:49:00.700026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.700039] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.486 [2024-07-15 18:49:00.700048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.700061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.700069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.700082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.700090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.700103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.700110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.700124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:40032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.700131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.700145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.700152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.700165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:40048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.700172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.700185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.700193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.700627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:40064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.700638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.700651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.700658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.700670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.700677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.700690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.700697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.700709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:40096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.700718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.700731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.700738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.700750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:40112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.700757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.700769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:40120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.700776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.700788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.700795] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.700807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.700815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.700827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.700834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.700846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.700853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.700866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.700872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.700884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.700891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.700903] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.700910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.700922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.700928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.700941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.700948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.701113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.486 [2024-07-15 18:49:00.701123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:59.486 [2024-07-15 18:49:00.701136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.701143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.701165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.701188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.701210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.701234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.701255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.701275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.701295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.701314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.701333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.701352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:39104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.487 [2024-07-15 18:49:00.701374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.487 [2024-07-15 18:49:00.701394] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.487 [2024-07-15 18:49:00.701413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.487 [2024-07-15 18:49:00.701433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:39136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.487 [2024-07-15 18:49:00.701452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.701472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.701491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701503] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.701510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.701530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.701548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.701567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.701585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.701608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.701627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.701646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.701665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.701916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.701938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701951] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.701958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.701979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.701996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.702003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.702016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.702023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.702036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.702044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.702056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.702064] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.702077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.702086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.702098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.702105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.702118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.702126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.702138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.702147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.702164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.702172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.702185] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.702192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.702205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.487 [2024-07-15 18:49:00.702212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:59.487 [2024-07-15 18:49:00.702229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.488 [2024-07-15 18:49:00.702237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:59.488 [2024-07-15 18:49:00.702249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.488 [2024-07-15 18:49:00.702257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:59.488 [2024-07-15 18:49:00.702270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.488 [2024-07-15 18:49:00.702277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:59.488 [2024-07-15 18:49:00.702290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.488 [2024-07-15 18:49:00.702298] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:59.488 [2024-07-15 18:49:00.702311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.488 [2024-07-15 18:49:00.702317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:59.488 [2024-07-15 18:49:00.702330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.488 [2024-07-15 18:49:00.702337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:59.488 [2024-07-15 18:49:00.702352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.488 [2024-07-15 18:49:00.702359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:59.488 [2024-07-15 18:49:00.702372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.488 [2024-07-15 18:49:00.702379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:59.488 [2024-07-15 18:49:00.702392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.488 [2024-07-15 18:49:00.702399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:59.488 [2024-07-15 18:49:00.702634] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.488 [2024-07-15 18:49:00.702645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:59.488
[... repeated nvme_qpair.c NOTICE pairs omitted: every outstanding I/O on qid:1 (WRITE lba:39200-40120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000; READ lba:39104-39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) completed with the same status, ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0, sqhd advancing 003b through 002c, logged between [2024-07-15 18:49:00.702659] and [2024-07-15 18:49:00.709526] ...]
[2024-07-15 18:49:00.709526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.709539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.709547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.709560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.709567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.709579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.709587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.709599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.709608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.709620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.709627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.709641] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.709648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.709662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.709670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.709684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.709691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.709704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.709712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.709724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.709732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.709745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.709752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.709764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.709774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.709786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.709794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.709807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.709814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.709827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.709835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.709848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.709856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.709869] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.709877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.709890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.709897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.709910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.709917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.709930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.709938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.709950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.709957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.709971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.709978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.709991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.709999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.710011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.710019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.710031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.710039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.710051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.710058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.710071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.710078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.710092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.710100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.710114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.710121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.710134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.710142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.710155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.710162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.710174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.710182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.710196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.710204] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.710217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.710229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.710243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.710250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:59.491 [2024-07-15 18:49:00.710263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.491 [2024-07-15 18:49:00.710270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.710291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.710312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.710333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.710353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:39872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.710374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.710394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.710414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.710434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.710454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.710474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.710494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.710516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.710536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710549] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.710556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.710576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.710596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.710618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.710638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.710657] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.710677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:40000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.710697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.492 [2024-07-15 18:49:00.710717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.492 [2024-07-15 18:49:00.710739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.492 [2024-07-15 18:49:00.710760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710773] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.492 [2024-07-15 18:49:00.710780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:39176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.492 [2024-07-15 18:49:00.710800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.492 [2024-07-15 18:49:00.710820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.492 [2024-07-15 18:49:00.710841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.710863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.710883] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.710902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:40032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.710923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.710943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:40048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.710962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.710983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.710995] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.711003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.711016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.711023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.711035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.711043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.711055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.711062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.711076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:40096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.711084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.711096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.492 [2024-07-15 18:49:00.711103] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:59.492 [2024-07-15 18:49:00.711118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:40112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.711126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.711138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:40120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.711145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.711158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.711165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.711179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.711186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.711199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.711206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.711219] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:39224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.711231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.711243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.711251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.711264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.711271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.711284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.711291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.711303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.711311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.711324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.711331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.711344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.711351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.711366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.711374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.712280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.712297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.712313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.712321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.712333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.712342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.712354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.712362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.712375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.712382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.712397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.712405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.712419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.712426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.712439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.712446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.712459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.712466] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.712479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.712487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.712500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.493 [2024-07-15 18:49:00.712507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.712520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.493 [2024-07-15 18:49:00.712530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.712543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.493 [2024-07-15 18:49:00.712551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.712564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:39128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.493 [2024-07-15 18:49:00.712572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.712584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:39136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.493 [2024-07-15 18:49:00.712592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.712605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.712613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.712626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.712633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.712646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.712653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.712666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.712674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.712687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.712694] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.712707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.712715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.713523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.713535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.713562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.493 [2024-07-15 18:49:00.713571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:59.493 [2024-07-15 18:49:00.713585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.713596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.713610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.713617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.713631] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.713639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.713653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.713660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.713674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.713682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.713696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.713703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.713717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.713725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.713738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.713746] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.713760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.713768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.713781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.713789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.713804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.713813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.713827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.713835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.713849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.713856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.713872] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.713879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.713893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.713902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.713954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.713963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.713979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.713987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.714002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.714010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.714026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.714033] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.714048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.714055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.714070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.714078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.714092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.714100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.714115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.714123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.714138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.714145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.714160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.714167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.714184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.714192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.714207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.714215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.714234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.714242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.714257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.714265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.714280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.714288] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.714303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.714310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.714326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.714334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.714349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.714356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.714371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.714379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.714394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.714401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.714416] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.714424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.714439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.714447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.714462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.714471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.714486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.714494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.714509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.714516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.714531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.714538] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:59.494 [2024-07-15 18:49:00.714553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.494 [2024-07-15 18:49:00.714562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.714577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.714585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.714600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.714607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.714622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.714629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.714645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.714652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.714667] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.714675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.714692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.714700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.714715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.714723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.714738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.714749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.714764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.714772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.714787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.714794] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.714810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.714818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.714833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.714841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.714856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.714864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.714879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.714887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.714902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.714909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.714925] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.714932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.714947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.714955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.714970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.714977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.714992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.715000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.715016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.715024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.715124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.715133] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.715152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.715160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.715178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.715186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.715204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.715212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.715235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.715243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.715262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.715269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.715287] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.715295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.715313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.715321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.715339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.715347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.715364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.715372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.715391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.495 [2024-07-15 18:49:00.715398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.715416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.495 [2024-07-15 18:49:00.715423] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.715443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.495 [2024-07-15 18:49:00.715450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.715467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.495 [2024-07-15 18:49:00.715475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.715494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:39176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.495 [2024-07-15 18:49:00.715501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.715519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.495 [2024-07-15 18:49:00.715527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.715545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.495 [2024-07-15 18:49:00.715552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.715571] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.715579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.715597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:40016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.715605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.715623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.715630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.715648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:40032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.495 [2024-07-15 18:49:00.715655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:59.495 [2024-07-15 18:49:00.715673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.496 [2024-07-15 18:49:00.715680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:00.715698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:40048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.496 [2024-07-15 18:49:00.715706] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:00.715724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.496 [2024-07-15 18:49:00.715731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:00.715750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:40064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.496 [2024-07-15 18:49:00.715759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:00.715776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.496 [2024-07-15 18:49:00.715784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:00.715802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.496 [2024-07-15 18:49:00.715809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:00.715827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.496 [2024-07-15 18:49:00.715834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:00.715852] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.496 [2024-07-15 18:49:00.715859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:00.715879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.496 [2024-07-15 18:49:00.715887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:00.715905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:40112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.496 [2024-07-15 18:49:00.715912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:00.715931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:40120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.496 [2024-07-15 18:49:00.715938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:00.715956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.496 [2024-07-15 18:49:00.715963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:00.715982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.496 [2024-07-15 18:49:00.715990] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:00.716007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.496 [2024-07-15 18:49:00.716015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:00.716032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.496 [2024-07-15 18:49:00.716040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:00.716058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.496 [2024-07-15 18:49:00.716067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:00.716085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.496 [2024-07-15 18:49:00.716092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:00.716111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.496 [2024-07-15 18:49:00.716118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:00.716136] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.496 [2024-07-15 18:49:00.716143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:00.716161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.496 [2024-07-15 18:49:00.716168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:00.716186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.496 [2024-07-15 18:49:00.716194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:00.716212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.496 [2024-07-15 18:49:00.716220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:13.543886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.496 [2024-07-15 18:49:13.543925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:13.543959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.496 [2024-07-15 18:49:13.543967] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:13.543980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.496 [2024-07-15 18:49:13.543987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:13.543999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.496 [2024-07-15 18:49:13.544006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:13.544018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.496 [2024-07-15 18:49:13.544025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:13.544038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.496 [2024-07-15 18:49:13.544044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:13.544062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.496 [2024-07-15 18:49:13.544069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:13.544081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.496 [2024-07-15 18:49:13.544088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:13.544100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:75696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.496 [2024-07-15 18:49:13.544107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:13.544120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.496 [2024-07-15 18:49:13.544127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:13.544140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.496 [2024-07-15 18:49:13.544147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:13.544158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.496 [2024-07-15 18:49:13.544166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:13.544178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.496 [2024-07-15 18:49:13.544186] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:13.544198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.496 [2024-07-15 18:49:13.544205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:13.544217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.496 [2024-07-15 18:49:13.544229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:13.544241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.496 [2024-07-15 18:49:13.544249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:13.544262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.496 [2024-07-15 18:49:13.544269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:13.545459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.496 [2024-07-15 18:49:13.545480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:13.545501] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.496 [2024-07-15 18:49:13.545509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:13.546371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.496 [2024-07-15 18:49:13.546388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:59.496 [2024-07-15 18:49:13.546405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.497 [2024-07-15 18:49:13.546412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:59.497 [2024-07-15 18:49:13.546425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.497 [2024-07-15 18:49:13.546432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:59.497 [2024-07-15 18:49:13.546445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.497 [2024-07-15 18:49:13.546452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:59.497 [2024-07-15 18:49:13.546465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.497 [2024-07-15 18:49:13.546472] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:59.497 [2024-07-15 18:49:13.546496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.497 [2024-07-15 18:49:13.546504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:59.497 [2024-07-15 18:49:13.546516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.497 [2024-07-15 18:49:13.546523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:59.497 Received shutdown signal, test time was about 27.119421 seconds 00:23:59.497 00:23:59.497 Latency(us) 00:23:59.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.497 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:59.497 Verification LBA range: start 0x0 length 0x4000 00:23:59.497 Nvme0n1 : 27.12 10297.19 40.22 0.00 0.00 12410.71 247.54 3078254.41 00:23:59.497 =================================================================================================================== 00:23:59.497 Total : 10297.19 40.22 0.00 0.00 12410.71 247.54 3078254.41 00:23:59.497 18:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:59.497 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:59.497 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:59.497 
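The summary row above reports 10297.19 IOPS and 40.22 MiB/s for Nvme0n1 over the ~27.12 s run. Those two columns are consistent with the job's 4096-byte I/O size; a quick sanity check (the variable names here are illustrative, not from the test scripts):

```shell
# Sanity check of the summary row above: with 4096-byte I/Os (the job line
# shows "IO size: 4096"), the reported IOPS should reproduce the MiB/s column.
iops=10297.19
io_size=4096
mib_s=$(awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f", i * s / (1024 * 1024) }')
echo "$mib_s MiB/s"
```

This matches the 40.22 MiB/s printed in the table, confirming the bandwidth column is derived from average IOPS times block size.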
18:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:23:59.497 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:59.497 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:23:59.497 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:59.497 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:23:59.497 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:59.497 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:59.497 rmmod nvme_tcp 00:23:59.497 rmmod nvme_fabrics 00:23:59.497 rmmod nvme_keyring 00:23:59.756 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:59.756 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:23:59.756 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:23:59.756 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1198923 ']' 00:23:59.756 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1198923 00:23:59.756 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1198923 ']' 00:23:59.756 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1198923 00:23:59.756 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:23:59.756 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:59.756 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1198923 00:23:59.756 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:59.756 
18:49:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:59.756 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1198923' 00:23:59.756 killing process with pid 1198923 00:23:59.756 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1198923 00:23:59.756 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1198923 00:23:59.756 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:59.756 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:59.756 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:59.756 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:59.756 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:59.756 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.756 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.757 18:49:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.341 18:49:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:02.341 00:24:02.341 real 0m38.014s 00:24:02.341 user 1m43.736s 00:24:02.341 sys 0m10.179s 00:24:02.341 18:49:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:02.341 18:49:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:02.341 ************************************ 00:24:02.341 END TEST nvmf_host_multipath_status 00:24:02.341 ************************************ 00:24:02.341 
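The teardown trace above shows `killprocess` checking the nvmf target pid with `kill -0`, inspecting its comm name, then killing and waiting on it. A minimal sketch of that pattern, omitting the `reactor_0`/`sudo` special-casing visible in the trace:

```shell
# Hedged sketch of the killprocess pattern traced above: confirm the pid
# is still alive (kill -0), terminate it, then reap it with wait so the
# test does not leave a zombie behind.
killprocess() {
    pid=$1
    if kill -0 "$pid" 2>/dev/null; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    fi
}

sleep 30 &
pid=$!
killprocess "$pid"
```

The `wait` after `kill` is what makes the subsequent `modprobe -r` and namespace cleanup safe to run: the target has fully exited before its resources are torn down.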
18:49:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:02.341 18:49:18 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:02.341 18:49:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:02.341 18:49:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:02.341 18:49:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:02.341 ************************************ 00:24:02.341 START TEST nvmf_discovery_remove_ifc 00:24:02.341 ************************************ 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:02.341 * Looking for test storage... 00:24:02.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # 
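In `common.sh` above, `nvme gen-hostnqn` produces the UUID-based host NQN captured in the log (`nqn.2014-08.org.nvmexpress:uuid:...`). An approximation of that output using the kernel's random UUID source rather than nvme-cli itself, so details may differ:

```shell
# Approximation of what `nvme gen-hostnqn` printed in the log: a host NQN
# of the form nqn.2014-08.org.nvmexpress:uuid:<UUID>. Sourcing the UUID
# from /proc (with uuidgen as a fallback) is an assumption of this sketch;
# nvme-cli has its own implementation.
uuid=$(cat /proc/sys/kernel/random/uuid 2>/dev/null || uuidgen)
hostnqn="nqn.2014-08.org.nvmexpress:uuid:${uuid}"
echo "$hostnqn"
```

The NQN and the bare UUID (`NVME_HOSTID`) are then passed to every `nvme connect` as `--hostnqn`/`--hostid`, as the `NVME_HOST` array in the trace shows.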
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:02.341 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:02.342 18:49:18 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:02.342 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:02.342 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:02.342 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:02.342 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.342 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:02.342 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:02.342 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:02.342 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.342 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.342 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.342 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:02.342 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:02.342 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:24:02.342 18:49:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:07.619 18:49:23 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 
-- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:07.619 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp 
== rdma ]] 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:07.619 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:07.619 Found net devices under 0000:86:00.0: cvl_0_0 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:07.619 Found net devices under 0000:86:00.1: cvl_0_1 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:07.619 18:49:23 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set lo up 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:07.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:07.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:24:07.619 00:24:07.619 --- 10.0.0.2 ping statistics --- 00:24:07.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.619 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:07.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:07.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:24:07.619 00:24:07.619 --- 10.0.0.1 ping statistics --- 00:24:07.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.619 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:07.619 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:07.620 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:07.620 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:07.620 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:07.620 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:07.620 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:07.620 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:07.620 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1207479 00:24:07.620 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1207479 00:24:07.620 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1207479 ']' 00:24:07.620 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.620 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:07.620 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:07.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.620 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:07.620 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:07.620 18:49:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:07.620 [2024-07-15 18:49:23.650248] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:24:07.620 [2024-07-15 18:49:23.650292] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.620 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.620 [2024-07-15 18:49:23.706431] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.620 [2024-07-15 18:49:23.785016] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:07.620 [2024-07-15 18:49:23.785048] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:07.620 [2024-07-15 18:49:23.785055] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:07.620 [2024-07-15 18:49:23.785061] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:07.620 [2024-07-15 18:49:23.785066] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
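The trace above shows `nvmfappstart` launching `nvmf_tgt` inside the `cvl_0_0_ns_spdk` namespace and then calling `waitforlisten 1207479` to block until the target's RPC socket is ready. A minimal sketch of that launch-and-poll pattern follows; `wait_for_socket` and its retry arguments are illustrative stand-ins, not SPDK's actual `waitforlisten` from `autotest_common.sh` (which also checks that the pid is alive):

```shell
#!/bin/sh
# Hedged sketch of the start-and-wait pattern visible in the log.
# wait_for_socket is a simplified stand-in: it polls until the RPC
# socket path exists, then returns 0 (or 1 after retries run out).
wait_for_socket() {
    sock=$1; retries=${2:-100}
    while [ "$retries" -gt 0 ]; do
        [ -e "$sock" ] && return 0      # real harness checks /var/tmp/spdk.sock
        retries=$((retries - 1))
        sleep 0.1
    done
    return 1
}

# Usage as in the log (requires root plus the namespace set up by nvmftestinit):
#   ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
#   wait_for_socket /var/tmp/spdk.sock
```

The real `waitforlisten` additionally issues an RPC (`rpc_get_methods`) to confirm the application is accepting requests, not merely that the socket file exists.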
00:24:07.620 [2024-07-15 18:49:23.785083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.879 18:49:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:07.879 18:49:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:24:07.879 18:49:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:07.879 18:49:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:07.879 18:49:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:07.879 18:49:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.879 18:49:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:07.879 18:49:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.879 18:49:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:07.879 [2024-07-15 18:49:24.483799] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.880 [2024-07-15 18:49:24.491927] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:07.880 null0 00:24:07.880 [2024-07-15 18:49:24.523939] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:07.880 18:49:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.880 18:49:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1207553 00:24:07.880 18:49:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:07.880 18:49:24 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1207553 /tmp/host.sock 00:24:07.880 18:49:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1207553 ']' 00:24:07.880 18:49:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:24:07.880 18:49:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:07.880 18:49:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:07.880 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:07.880 18:49:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:07.880 18:49:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:07.880 [2024-07-15 18:49:24.573472] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:24:07.880 [2024-07-15 18:49:24.573519] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1207553 ] 00:24:08.138 EAL: No free 2048 kB hugepages reported on node 1 00:24:08.138 [2024-07-15 18:49:24.626352] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.139 [2024-07-15 18:49:24.704983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.706 18:49:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:08.706 18:49:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:24:08.706 18:49:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:08.706 18:49:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:08.706 18:49:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.706 18:49:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:08.965 18:49:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.965 18:49:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:08.965 18:49:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.965 18:49:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:08.965 18:49:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.965 18:49:25 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:08.965 18:49:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.965 18:49:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:09.901 [2024-07-15 18:49:26.552389] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:09.901 [2024-07-15 18:49:26.552409] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:09.901 [2024-07-15 18:49:26.552424] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:10.160 [2024-07-15 18:49:26.638683] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:10.160 [2024-07-15 18:49:26.817131] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:10.160 [2024-07-15 18:49:26.817173] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:10.160 [2024-07-15 18:49:26.817193] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:10.160 [2024-07-15 18:49:26.817205] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:10.160 [2024-07-15 18:49:26.817229] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:10.160 18:49:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.160 18:49:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:10.160 18:49:26 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:10.160 [2024-07-15 18:49:26.821050] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1700e30 was disconnected and freed. delete nvme_qpair. 00:24:10.160 18:49:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:10.160 18:49:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:10.160 18:49:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.160 18:49:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:10.160 18:49:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.160 18:49:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:10.160 18:49:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.160 18:49:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:10.419 18:49:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:10.419 18:49:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:10.419 18:49:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:10.419 18:49:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:10.419 18:49:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:10.419 18:49:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:10.419 18:49:26 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:10.419 18:49:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.419 18:49:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:10.419 18:49:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.419 18:49:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.419 18:49:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:10.419 18:49:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:11.354 18:49:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:11.354 18:49:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:11.354 18:49:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:11.354 18:49:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.354 18:49:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:11.354 18:49:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:11.354 18:49:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:11.354 18:49:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.613 18:49:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:11.613 18:49:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:12.548 18:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:12.548 18:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:12.548 18:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:12.548 18:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.548 18:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:12.548 18:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:12.548 18:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:12.548 18:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.548 18:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:12.548 18:49:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:13.485 18:49:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:13.485 18:49:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:13.485 18:49:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:13.485 18:49:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:13.485 18:49:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.485 18:49:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:13.485 18:49:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:13.485 18:49:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.485 18:49:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:13.485 18:49:30 nvmf_tcp.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:14.861 18:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:14.861 18:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:14.861 18:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:14.861 18:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:14.861 18:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.861 18:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:14.861 18:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:14.861 18:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.861 18:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:14.861 18:49:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:15.794 18:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:15.794 18:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:15.794 18:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:15.794 18:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.794 18:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:15.794 18:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:15.794 18:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:15.794 18:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.794 [2024-07-15 18:49:32.258444] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:15.794 [2024-07-15 18:49:32.258485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.794 [2024-07-15 18:49:32.258496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.794 [2024-07-15 18:49:32.258505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.794 [2024-07-15 18:49:32.258512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.794 [2024-07-15 18:49:32.258520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.794 [2024-07-15 18:49:32.258526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.794 [2024-07-15 18:49:32.258534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.794 [2024-07-15 18:49:32.258541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.794 [2024-07-15 18:49:32.258548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.794 [2024-07-15 18:49:32.258556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:15.794 [2024-07-15 18:49:32.258562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7690 is same with the state(5) to be set 00:24:15.794 [2024-07-15 18:49:32.268466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c7690 (9): Bad file descriptor 00:24:15.794 18:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:15.794 18:49:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:15.794 [2024-07-15 18:49:32.278505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:16.730 18:49:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:16.730 [2024-07-15 18:49:33.282244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:16.730 [2024-07-15 18:49:33.282284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16c7690 with addr=10.0.0.2, port=4420 00:24:16.730 [2024-07-15 18:49:33.282298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7690 is same with the state(5) to be set 00:24:16.730 [2024-07-15 18:49:33.282322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c7690 (9): Bad file descriptor 00:24:16.730 [2024-07-15 18:49:33.282365] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:16.730 [2024-07-15 18:49:33.282383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:16.730 [2024-07-15 18:49:33.282392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:16.730 [2024-07-15 18:49:33.282402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:16.730 [2024-07-15 18:49:33.282420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:16.730 [2024-07-15 18:49:33.282432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:16.730 18:49:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:16.730 18:49:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:16.730 18:49:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:16.730 18:49:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.730 18:49:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:16.730 18:49:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:16.730 18:49:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.730 18:49:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:16.730 18:49:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:17.668 [2024-07-15 18:49:34.284910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:24:17.668 [2024-07-15 18:49:34.284933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:17.668 [2024-07-15 18:49:34.284940] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:17.668 [2024-07-15 18:49:34.284947] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:24:17.668 [2024-07-15 18:49:34.284958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:17.668 [2024-07-15 18:49:34.284977] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:17.668 [2024-07-15 18:49:34.284995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.668 [2024-07-15 18:49:34.285004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.668 [2024-07-15 18:49:34.285012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.668 [2024-07-15 18:49:34.285023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.668 [2024-07-15 18:49:34.285030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.668 [2024-07-15 18:49:34.285037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.668 [2024-07-15 18:49:34.285044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.668 
[2024-07-15 18:49:34.285050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.668 [2024-07-15 18:49:34.285057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.668 [2024-07-15 18:49:34.285064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.668 [2024-07-15 18:49:34.285070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:24:17.668 [2024-07-15 18:49:34.285091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c6a80 (9): Bad file descriptor 00:24:17.668 [2024-07-15 18:49:34.286091] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:17.668 [2024-07-15 18:49:34.286102] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:17.668 18:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:17.668 18:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:17.668 18:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:17.668 18:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.668 18:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:17.668 18:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:17.668 18:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:17.668 18:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:24:17.927 18:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:17.927 18:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:17.927 18:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:17.927 18:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:17.927 18:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:17.927 18:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:17.927 18:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:17.927 18:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.927 18:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:17.927 18:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:17.927 18:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:17.927 18:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.927 18:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:17.927 18:49:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:18.864 18:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:18.864 18:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:18.864 18:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:18.864 18:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.864 18:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:18.864 18:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:18.864 18:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:18.864 18:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.864 18:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:18.864 18:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:19.802 [2024-07-15 18:49:36.296960] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:19.802 [2024-07-15 18:49:36.296978] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:19.802 [2024-07-15 18:49:36.296990] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:19.802 [2024-07-15 18:49:36.426404] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:20.062 18:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:20.062 18:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:20.062 18:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:20.062 18:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:20.062 18:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.062 
18:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:20.062 18:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:20.062 18:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.062 [2024-07-15 18:49:36.610118] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:20.062 [2024-07-15 18:49:36.610153] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:20.062 [2024-07-15 18:49:36.610172] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:20.062 [2024-07-15 18:49:36.610184] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:20.062 [2024-07-15 18:49:36.610191] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:20.062 18:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:20.062 18:49:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:20.062 [2024-07-15 18:49:36.616473] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x16dd8d0 was disconnected and freed. delete nvme_qpair. 
00:24:20.999 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:20.999 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:20.999 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:20.999 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.999 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:20.999 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:20.999 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:20.999 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.999 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:20.999 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:20.999 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1207553 00:24:20.999 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1207553 ']' 00:24:20.999 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1207553 00:24:20.999 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:24:20.999 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:20.999 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1207553 00:24:21.260 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:21.260 18:49:37 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:21.260 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1207553' 00:24:21.260 killing process with pid 1207553 00:24:21.260 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1207553 00:24:21.260 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1207553 00:24:21.260 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:21.260 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:21.260 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:24:21.260 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:21.260 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:24:21.260 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:21.260 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:21.260 rmmod nvme_tcp 00:24:21.260 rmmod nvme_fabrics 00:24:21.260 rmmod nvme_keyring 00:24:21.260 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:21.260 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:24:21.260 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:24:21.260 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1207479 ']' 00:24:21.260 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1207479 00:24:21.260 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1207479 ']' 00:24:21.260 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # 
kill -0 1207479 00:24:21.519 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:24:21.519 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:21.519 18:49:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1207479 00:24:21.519 18:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:21.519 18:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:21.519 18:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1207479' 00:24:21.519 killing process with pid 1207479 00:24:21.519 18:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1207479 00:24:21.519 18:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1207479 00:24:21.519 18:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:21.519 18:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:21.519 18:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:21.519 18:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:21.519 18:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:21.519 18:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.519 18:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:21.519 18:49:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.118 18:49:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:24:24.118 00:24:24.118 real 0m21.722s 00:24:24.118 user 0m28.752s 00:24:24.118 sys 0m5.082s 00:24:24.118 18:49:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:24.118 18:49:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:24.118 ************************************ 00:24:24.118 END TEST nvmf_discovery_remove_ifc 00:24:24.118 ************************************ 00:24:24.118 18:49:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:24.118 18:49:40 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:24.118 18:49:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:24.118 18:49:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:24.118 18:49:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:24.118 ************************************ 00:24:24.118 START TEST nvmf_identify_kernel_target 00:24:24.118 ************************************ 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:24.118 * Looking for test storage... 
00:24:24.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.118 18:49:40 
nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:24.118 18:49:40 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:24.118 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.119 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:24.119 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.119 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:24.119 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:24.119 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:24:24.119 18:49:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:24:29.394 18:49:45 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:29.394 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:29.394 18:49:45 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:29.394 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:29.394 Found net devices under 0000:86:00.0: cvl_0_0 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:29.394 Found net devices under 0000:86:00.1: cvl_0_1 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link 
set cvl_0_1 up 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:29.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:29.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:24:29.394 00:24:29.394 --- 10.0.0.2 ping statistics --- 00:24:29.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.394 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:29.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:29.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:24:29.394 00:24:29.394 --- 10.0.0.1 ping statistics --- 00:24:29.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.394 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:29.394 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:29.395 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:29.395 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:29.395 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:29.395 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:29.395 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:24:29.395 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:29.395 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:29.395 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.395 18:49:45 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.395 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:29.395 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.395 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:29.395 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:29.395 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:29.395 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:29.395 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:29.395 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:29.395 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:29.395 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:29.395 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:29.395 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:29.395 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:24:29.395 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:29.395 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:29.395 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:29.395 18:49:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:31.925 Waiting for block devices as requested 00:24:31.925 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:31.925 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:31.925 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:31.925 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:31.925 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:32.183 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:32.183 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:32.183 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:32.183 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:32.441 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:32.441 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:32.441 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:32.700 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:32.700 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:32.700 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:32.700 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:32.958 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:32.958 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:32.958 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:32.958 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:32.958 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:32.958 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:32.958 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:32.958 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:32.958 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:32.958 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:32.958 No valid GPT data, bailing 00:24:32.958 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:32.958 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:32.958 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:32.958 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:32.958 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:32.958 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:32.958 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:32.958 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:32.958 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:32.958 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:24:32.958 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:32.958 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@669 -- # echo 1 00:24:32.958 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:32.958 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:24:32.958 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:24:32.958 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:24:32.958 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:32.958 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:33.219 00:24:33.219 Discovery Log Number of Records 2, Generation counter 2 00:24:33.219 =====Discovery Log Entry 0====== 00:24:33.219 trtype: tcp 00:24:33.219 adrfam: ipv4 00:24:33.219 subtype: current discovery subsystem 00:24:33.219 treq: not specified, sq flow control disable supported 00:24:33.219 portid: 1 00:24:33.219 trsvcid: 4420 00:24:33.219 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:33.219 traddr: 10.0.0.1 00:24:33.219 eflags: none 00:24:33.219 sectype: none 00:24:33.219 =====Discovery Log Entry 1====== 00:24:33.219 trtype: tcp 00:24:33.219 adrfam: ipv4 00:24:33.219 subtype: nvme subsystem 00:24:33.219 treq: not specified, sq flow control disable supported 00:24:33.219 portid: 1 00:24:33.219 trsvcid: 4420 00:24:33.219 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:33.219 traddr: 10.0.0.1 00:24:33.219 eflags: none 00:24:33.219 sectype: none 00:24:33.219 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:33.219 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:33.219 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.219 ===================================================== 00:24:33.219 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:33.219 ===================================================== 00:24:33.219 Controller Capabilities/Features 00:24:33.219 ================================ 00:24:33.219 Vendor ID: 0000 00:24:33.219 Subsystem Vendor ID: 0000 00:24:33.219 Serial Number: 5a847e82b9d5a1747914 00:24:33.219 Model Number: Linux 00:24:33.219 Firmware Version: 6.7.0-68 00:24:33.219 Recommended Arb Burst: 0 00:24:33.219 IEEE OUI Identifier: 00 00 00 00:24:33.219 Multi-path I/O 00:24:33.219 May have multiple subsystem ports: No 00:24:33.219 May have multiple controllers: No 00:24:33.219 Associated with SR-IOV VF: No 00:24:33.219 Max Data Transfer Size: Unlimited 00:24:33.219 Max Number of Namespaces: 0 00:24:33.219 Max Number of I/O Queues: 1024 00:24:33.219 NVMe Specification Version (VS): 1.3 00:24:33.219 NVMe Specification Version (Identify): 1.3 00:24:33.219 Maximum Queue Entries: 1024 00:24:33.219 Contiguous Queues Required: No 00:24:33.219 Arbitration Mechanisms Supported 00:24:33.219 Weighted Round Robin: Not Supported 00:24:33.219 Vendor Specific: Not Supported 00:24:33.219 Reset Timeout: 7500 ms 00:24:33.219 Doorbell Stride: 4 bytes 00:24:33.219 NVM Subsystem Reset: Not Supported 00:24:33.219 Command Sets Supported 00:24:33.219 NVM Command Set: Supported 00:24:33.219 Boot Partition: Not Supported 00:24:33.219 Memory Page Size Minimum: 4096 bytes 00:24:33.219 Memory Page Size Maximum: 4096 bytes 00:24:33.219 Persistent Memory Region: Not Supported 00:24:33.219 Optional Asynchronous Events Supported 00:24:33.219 Namespace Attribute Notices: Not Supported 00:24:33.219 Firmware Activation Notices: Not Supported 00:24:33.219 ANA Change Notices: Not Supported 00:24:33.219 PLE Aggregate Log Change Notices: Not Supported 
00:24:33.219 LBA Status Info Alert Notices: Not Supported 00:24:33.219 EGE Aggregate Log Change Notices: Not Supported 00:24:33.219 Normal NVM Subsystem Shutdown event: Not Supported 00:24:33.219 Zone Descriptor Change Notices: Not Supported 00:24:33.219 Discovery Log Change Notices: Supported 00:24:33.219 Controller Attributes 00:24:33.219 128-bit Host Identifier: Not Supported 00:24:33.219 Non-Operational Permissive Mode: Not Supported 00:24:33.219 NVM Sets: Not Supported 00:24:33.219 Read Recovery Levels: Not Supported 00:24:33.219 Endurance Groups: Not Supported 00:24:33.219 Predictable Latency Mode: Not Supported 00:24:33.219 Traffic Based Keep ALive: Not Supported 00:24:33.219 Namespace Granularity: Not Supported 00:24:33.219 SQ Associations: Not Supported 00:24:33.219 UUID List: Not Supported 00:24:33.219 Multi-Domain Subsystem: Not Supported 00:24:33.219 Fixed Capacity Management: Not Supported 00:24:33.219 Variable Capacity Management: Not Supported 00:24:33.219 Delete Endurance Group: Not Supported 00:24:33.219 Delete NVM Set: Not Supported 00:24:33.219 Extended LBA Formats Supported: Not Supported 00:24:33.219 Flexible Data Placement Supported: Not Supported 00:24:33.219 00:24:33.219 Controller Memory Buffer Support 00:24:33.219 ================================ 00:24:33.219 Supported: No 00:24:33.219 00:24:33.219 Persistent Memory Region Support 00:24:33.219 ================================ 00:24:33.219 Supported: No 00:24:33.219 00:24:33.219 Admin Command Set Attributes 00:24:33.219 ============================ 00:24:33.219 Security Send/Receive: Not Supported 00:24:33.219 Format NVM: Not Supported 00:24:33.219 Firmware Activate/Download: Not Supported 00:24:33.219 Namespace Management: Not Supported 00:24:33.219 Device Self-Test: Not Supported 00:24:33.219 Directives: Not Supported 00:24:33.219 NVMe-MI: Not Supported 00:24:33.219 Virtualization Management: Not Supported 00:24:33.219 Doorbell Buffer Config: Not Supported 00:24:33.219 Get LBA Status 
Capability: Not Supported 00:24:33.219 Command & Feature Lockdown Capability: Not Supported 00:24:33.219 Abort Command Limit: 1 00:24:33.219 Async Event Request Limit: 1 00:24:33.219 Number of Firmware Slots: N/A 00:24:33.219 Firmware Slot 1 Read-Only: N/A 00:24:33.219 Firmware Activation Without Reset: N/A 00:24:33.219 Multiple Update Detection Support: N/A 00:24:33.219 Firmware Update Granularity: No Information Provided 00:24:33.219 Per-Namespace SMART Log: No 00:24:33.219 Asymmetric Namespace Access Log Page: Not Supported 00:24:33.219 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:33.219 Command Effects Log Page: Not Supported 00:24:33.219 Get Log Page Extended Data: Supported 00:24:33.219 Telemetry Log Pages: Not Supported 00:24:33.219 Persistent Event Log Pages: Not Supported 00:24:33.219 Supported Log Pages Log Page: May Support 00:24:33.219 Commands Supported & Effects Log Page: Not Supported 00:24:33.219 Feature Identifiers & Effects Log Page:May Support 00:24:33.219 NVMe-MI Commands & Effects Log Page: May Support 00:24:33.219 Data Area 4 for Telemetry Log: Not Supported 00:24:33.219 Error Log Page Entries Supported: 1 00:24:33.219 Keep Alive: Not Supported 00:24:33.219 00:24:33.219 NVM Command Set Attributes 00:24:33.219 ========================== 00:24:33.219 Submission Queue Entry Size 00:24:33.219 Max: 1 00:24:33.219 Min: 1 00:24:33.219 Completion Queue Entry Size 00:24:33.219 Max: 1 00:24:33.219 Min: 1 00:24:33.219 Number of Namespaces: 0 00:24:33.219 Compare Command: Not Supported 00:24:33.219 Write Uncorrectable Command: Not Supported 00:24:33.219 Dataset Management Command: Not Supported 00:24:33.219 Write Zeroes Command: Not Supported 00:24:33.219 Set Features Save Field: Not Supported 00:24:33.219 Reservations: Not Supported 00:24:33.219 Timestamp: Not Supported 00:24:33.219 Copy: Not Supported 00:24:33.219 Volatile Write Cache: Not Present 00:24:33.219 Atomic Write Unit (Normal): 1 00:24:33.219 Atomic Write Unit (PFail): 1 
00:24:33.219 Atomic Compare & Write Unit: 1 00:24:33.219 Fused Compare & Write: Not Supported 00:24:33.219 Scatter-Gather List 00:24:33.219 SGL Command Set: Supported 00:24:33.219 SGL Keyed: Not Supported 00:24:33.219 SGL Bit Bucket Descriptor: Not Supported 00:24:33.219 SGL Metadata Pointer: Not Supported 00:24:33.219 Oversized SGL: Not Supported 00:24:33.219 SGL Metadata Address: Not Supported 00:24:33.219 SGL Offset: Supported 00:24:33.219 Transport SGL Data Block: Not Supported 00:24:33.219 Replay Protected Memory Block: Not Supported 00:24:33.219 00:24:33.219 Firmware Slot Information 00:24:33.219 ========================= 00:24:33.219 Active slot: 0 00:24:33.219 00:24:33.219 00:24:33.219 Error Log 00:24:33.219 ========= 00:24:33.219 00:24:33.219 Active Namespaces 00:24:33.219 ================= 00:24:33.219 Discovery Log Page 00:24:33.219 ================== 00:24:33.219 Generation Counter: 2 00:24:33.219 Number of Records: 2 00:24:33.219 Record Format: 0 00:24:33.219 00:24:33.219 Discovery Log Entry 0 00:24:33.219 ---------------------- 00:24:33.219 Transport Type: 3 (TCP) 00:24:33.219 Address Family: 1 (IPv4) 00:24:33.219 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:33.219 Entry Flags: 00:24:33.219 Duplicate Returned Information: 0 00:24:33.219 Explicit Persistent Connection Support for Discovery: 0 00:24:33.219 Transport Requirements: 00:24:33.219 Secure Channel: Not Specified 00:24:33.219 Port ID: 1 (0x0001) 00:24:33.219 Controller ID: 65535 (0xffff) 00:24:33.219 Admin Max SQ Size: 32 00:24:33.219 Transport Service Identifier: 4420 00:24:33.219 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:33.219 Transport Address: 10.0.0.1 00:24:33.219 Discovery Log Entry 1 00:24:33.219 ---------------------- 00:24:33.219 Transport Type: 3 (TCP) 00:24:33.219 Address Family: 1 (IPv4) 00:24:33.219 Subsystem Type: 2 (NVM Subsystem) 00:24:33.220 Entry Flags: 00:24:33.220 Duplicate Returned Information: 0 00:24:33.220 Explicit Persistent 
Connection Support for Discovery: 0 00:24:33.220 Transport Requirements: 00:24:33.220 Secure Channel: Not Specified 00:24:33.220 Port ID: 1 (0x0001) 00:24:33.220 Controller ID: 65535 (0xffff) 00:24:33.220 Admin Max SQ Size: 32 00:24:33.220 Transport Service Identifier: 4420 00:24:33.220 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:33.220 Transport Address: 10.0.0.1 00:24:33.220 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:33.220 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.220 get_feature(0x01) failed 00:24:33.220 get_feature(0x02) failed 00:24:33.220 get_feature(0x04) failed 00:24:33.220 ===================================================== 00:24:33.220 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:33.220 ===================================================== 00:24:33.220 Controller Capabilities/Features 00:24:33.220 ================================ 00:24:33.220 Vendor ID: 0000 00:24:33.220 Subsystem Vendor ID: 0000 00:24:33.220 Serial Number: 6f804042d61f42e6752f 00:24:33.220 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:33.220 Firmware Version: 6.7.0-68 00:24:33.220 Recommended Arb Burst: 6 00:24:33.220 IEEE OUI Identifier: 00 00 00 00:24:33.220 Multi-path I/O 00:24:33.220 May have multiple subsystem ports: Yes 00:24:33.220 May have multiple controllers: Yes 00:24:33.220 Associated with SR-IOV VF: No 00:24:33.220 Max Data Transfer Size: Unlimited 00:24:33.220 Max Number of Namespaces: 1024 00:24:33.220 Max Number of I/O Queues: 128 00:24:33.220 NVMe Specification Version (VS): 1.3 00:24:33.220 NVMe Specification Version (Identify): 1.3 00:24:33.220 Maximum Queue Entries: 1024 00:24:33.220 Contiguous Queues Required: No 00:24:33.220 Arbitration Mechanisms Supported 
00:24:33.220 Weighted Round Robin: Not Supported 00:24:33.220 Vendor Specific: Not Supported 00:24:33.220 Reset Timeout: 7500 ms 00:24:33.220 Doorbell Stride: 4 bytes 00:24:33.220 NVM Subsystem Reset: Not Supported 00:24:33.220 Command Sets Supported 00:24:33.220 NVM Command Set: Supported 00:24:33.220 Boot Partition: Not Supported 00:24:33.220 Memory Page Size Minimum: 4096 bytes 00:24:33.220 Memory Page Size Maximum: 4096 bytes 00:24:33.220 Persistent Memory Region: Not Supported 00:24:33.220 Optional Asynchronous Events Supported 00:24:33.220 Namespace Attribute Notices: Supported 00:24:33.220 Firmware Activation Notices: Not Supported 00:24:33.220 ANA Change Notices: Supported 00:24:33.220 PLE Aggregate Log Change Notices: Not Supported 00:24:33.220 LBA Status Info Alert Notices: Not Supported 00:24:33.220 EGE Aggregate Log Change Notices: Not Supported 00:24:33.220 Normal NVM Subsystem Shutdown event: Not Supported 00:24:33.220 Zone Descriptor Change Notices: Not Supported 00:24:33.220 Discovery Log Change Notices: Not Supported 00:24:33.220 Controller Attributes 00:24:33.220 128-bit Host Identifier: Supported 00:24:33.220 Non-Operational Permissive Mode: Not Supported 00:24:33.220 NVM Sets: Not Supported 00:24:33.220 Read Recovery Levels: Not Supported 00:24:33.220 Endurance Groups: Not Supported 00:24:33.220 Predictable Latency Mode: Not Supported 00:24:33.220 Traffic Based Keep ALive: Supported 00:24:33.220 Namespace Granularity: Not Supported 00:24:33.220 SQ Associations: Not Supported 00:24:33.220 UUID List: Not Supported 00:24:33.220 Multi-Domain Subsystem: Not Supported 00:24:33.220 Fixed Capacity Management: Not Supported 00:24:33.220 Variable Capacity Management: Not Supported 00:24:33.220 Delete Endurance Group: Not Supported 00:24:33.220 Delete NVM Set: Not Supported 00:24:33.220 Extended LBA Formats Supported: Not Supported 00:24:33.220 Flexible Data Placement Supported: Not Supported 00:24:33.220 00:24:33.220 Controller Memory Buffer Support 
00:24:33.220 ================================
00:24:33.220 Supported: No
00:24:33.220
00:24:33.220 Persistent Memory Region Support
00:24:33.220 ================================
00:24:33.220 Supported: No
00:24:33.220
00:24:33.220 Admin Command Set Attributes
00:24:33.220 ============================
00:24:33.220 Security Send/Receive: Not Supported
00:24:33.220 Format NVM: Not Supported
00:24:33.220 Firmware Activate/Download: Not Supported
00:24:33.220 Namespace Management: Not Supported
00:24:33.220 Device Self-Test: Not Supported
00:24:33.220 Directives: Not Supported
00:24:33.220 NVMe-MI: Not Supported
00:24:33.220 Virtualization Management: Not Supported
00:24:33.220 Doorbell Buffer Config: Not Supported
00:24:33.220 Get LBA Status Capability: Not Supported
00:24:33.220 Command & Feature Lockdown Capability: Not Supported
00:24:33.220 Abort Command Limit: 4
00:24:33.220 Async Event Request Limit: 4
00:24:33.220 Number of Firmware Slots: N/A
00:24:33.220 Firmware Slot 1 Read-Only: N/A
00:24:33.220 Firmware Activation Without Reset: N/A
00:24:33.220 Multiple Update Detection Support: N/A
00:24:33.220 Firmware Update Granularity: No Information Provided
00:24:33.220 Per-Namespace SMART Log: Yes
00:24:33.220 Asymmetric Namespace Access Log Page: Supported
00:24:33.220 ANA Transition Time : 10 sec
00:24:33.220
00:24:33.220 Asymmetric Namespace Access Capabilities
00:24:33.220 ANA Optimized State : Supported
00:24:33.220 ANA Non-Optimized State : Supported
00:24:33.220 ANA Inaccessible State : Supported
00:24:33.220 ANA Persistent Loss State : Supported
00:24:33.220 ANA Change State : Supported
00:24:33.220 ANAGRPID is not changed : No
00:24:33.220 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported
00:24:33.220
00:24:33.220 ANA Group Identifier Maximum : 128
00:24:33.220 Number of ANA Group Identifiers : 128
00:24:33.220 Max Number of Allowed Namespaces : 1024
00:24:33.220 Subsystem NQN: nqn.2016-06.io.spdk:testnqn
00:24:33.220 Command Effects Log Page: Supported
00:24:33.220 Get Log Page Extended Data: Supported
00:24:33.220 Telemetry Log Pages: Not Supported
00:24:33.220 Persistent Event Log Pages: Not Supported
00:24:33.220 Supported Log Pages Log Page: May Support
00:24:33.220 Commands Supported & Effects Log Page: Not Supported
00:24:33.220 Feature Identifiers & Effects Log Page:May Support
00:24:33.220 NVMe-MI Commands & Effects Log Page: May Support
00:24:33.220 Data Area 4 for Telemetry Log: Not Supported
00:24:33.220 Error Log Page Entries Supported: 128
00:24:33.220 Keep Alive: Supported
00:24:33.220 Keep Alive Granularity: 1000 ms
00:24:33.220
00:24:33.220 NVM Command Set Attributes
00:24:33.220 ==========================
00:24:33.220 Submission Queue Entry Size
00:24:33.220 Max: 64
00:24:33.220 Min: 64
00:24:33.220 Completion Queue Entry Size
00:24:33.220 Max: 16
00:24:33.220 Min: 16
00:24:33.220 Number of Namespaces: 1024
00:24:33.220 Compare Command: Not Supported
00:24:33.220 Write Uncorrectable Command: Not Supported
00:24:33.220 Dataset Management Command: Supported
00:24:33.220 Write Zeroes Command: Supported
00:24:33.220 Set Features Save Field: Not Supported
00:24:33.220 Reservations: Not Supported
00:24:33.220 Timestamp: Not Supported
00:24:33.220 Copy: Not Supported
00:24:33.220 Volatile Write Cache: Present
00:24:33.220 Atomic Write Unit (Normal): 1
00:24:33.220 Atomic Write Unit (PFail): 1
00:24:33.220 Atomic Compare & Write Unit: 1
00:24:33.220 Fused Compare & Write: Not Supported
00:24:33.220 Scatter-Gather List
00:24:33.220 SGL Command Set: Supported
00:24:33.220 SGL Keyed: Not Supported
00:24:33.220 SGL Bit Bucket Descriptor: Not Supported
00:24:33.220 SGL Metadata Pointer: Not Supported
00:24:33.220 Oversized SGL: Not Supported
00:24:33.220 SGL Metadata Address: Not Supported
00:24:33.220 SGL Offset: Supported
00:24:33.220 Transport SGL Data Block: Not Supported
00:24:33.220 Replay Protected Memory Block: Not Supported
00:24:33.220
00:24:33.220 Firmware Slot Information
00:24:33.220 =========================
00:24:33.220 Active slot: 0
00:24:33.220
00:24:33.220 Asymmetric Namespace Access
00:24:33.220 ===========================
00:24:33.220 Change Count : 0
00:24:33.220 Number of ANA Group Descriptors : 1
00:24:33.220 ANA Group Descriptor : 0
00:24:33.220 ANA Group ID : 1
00:24:33.220 Number of NSID Values : 1
00:24:33.220 Change Count : 0
00:24:33.220 ANA State : 1
00:24:33.220 Namespace Identifier : 1
00:24:33.220
00:24:33.220 Commands Supported and Effects
00:24:33.220 ==============================
00:24:33.220 Admin Commands
00:24:33.220 --------------
00:24:33.220 Get Log Page (02h): Supported
00:24:33.220 Identify (06h): Supported
00:24:33.220 Abort (08h): Supported
00:24:33.220 Set Features (09h): Supported
00:24:33.220 Get Features (0Ah): Supported
00:24:33.220 Asynchronous Event Request (0Ch): Supported
00:24:33.220 Keep Alive (18h): Supported
00:24:33.220 I/O Commands
00:24:33.220 ------------
00:24:33.220 Flush (00h): Supported
00:24:33.220 Write (01h): Supported LBA-Change
00:24:33.220 Read (02h): Supported
00:24:33.220 Write Zeroes (08h): Supported LBA-Change
00:24:33.220 Dataset Management (09h): Supported
00:24:33.220
00:24:33.220 Error Log
00:24:33.220 =========
00:24:33.220 Entry: 0
00:24:33.220 Error Count: 0x3
00:24:33.221 Submission Queue Id: 0x0
00:24:33.221 Command Id: 0x5
00:24:33.221 Phase Bit: 0
00:24:33.221 Status Code: 0x2
00:24:33.221 Status Code Type: 0x0
00:24:33.221 Do Not Retry: 1
00:24:33.221 Error Location: 0x28
00:24:33.221 LBA: 0x0
00:24:33.221 Namespace: 0x0
00:24:33.221 Vendor Log Page: 0x0
00:24:33.221 -----------
00:24:33.221 Entry: 1
00:24:33.221 Error Count: 0x2
00:24:33.221 Submission Queue Id: 0x0
00:24:33.221 Command Id: 0x5
00:24:33.221 Phase Bit: 0
00:24:33.221 Status Code: 0x2
00:24:33.221 Status Code Type: 0x0
00:24:33.221 Do Not Retry: 1
00:24:33.221 Error Location: 0x28
00:24:33.221 LBA: 0x0
00:24:33.221 Namespace: 0x0
00:24:33.221 Vendor Log Page: 0x0
00:24:33.221 -----------
00:24:33.221 Entry: 2
00:24:33.221 Error Count: 0x1
00:24:33.221 Submission Queue Id: 0x0
00:24:33.221 Command Id: 0x4
00:24:33.221 Phase Bit: 0
00:24:33.221 Status Code: 0x2
00:24:33.221 Status Code Type: 0x0
00:24:33.221 Do Not Retry: 1
00:24:33.221 Error Location: 0x28
00:24:33.221 LBA: 0x0
00:24:33.221 Namespace: 0x0
00:24:33.221 Vendor Log Page: 0x0
00:24:33.221
00:24:33.221 Number of Queues
00:24:33.221 ================
00:24:33.221 Number of I/O Submission Queues: 128
00:24:33.221 Number of I/O Completion Queues: 128
00:24:33.221
00:24:33.221 ZNS Specific Controller Data
00:24:33.221 ============================
00:24:33.221 Zone Append Size Limit: 0
00:24:33.221
00:24:33.221
00:24:33.221 Active Namespaces
00:24:33.221 =================
00:24:33.221 get_feature(0x05) failed
00:24:33.221 Namespace ID:1
00:24:33.221 Command Set Identifier: NVM (00h)
00:24:33.221 Deallocate: Supported
00:24:33.221 Deallocated/Unwritten Error: Not Supported
00:24:33.221 Deallocated Read Value: Unknown
00:24:33.221 Deallocate in Write Zeroes: Not Supported
00:24:33.221 Deallocated Guard Field: 0xFFFF
00:24:33.221 Flush: Supported
00:24:33.221 Reservation: Not Supported
00:24:33.221 Namespace Sharing Capabilities: Multiple Controllers
00:24:33.221 Size (in LBAs): 1953525168 (931GiB)
00:24:33.221 Capacity (in LBAs): 1953525168 (931GiB)
00:24:33.221 Utilization (in LBAs): 1953525168 (931GiB)
00:24:33.221 UUID: 193167be-9cc1-4f00-a6cb-17044e897010
00:24:33.221 Thin Provisioning: Not Supported
00:24:33.221 Per-NS Atomic Units: Yes
00:24:33.221 Atomic Boundary Size (Normal): 0
00:24:33.221 Atomic Boundary Size (PFail): 0
00:24:33.221 Atomic Boundary Offset: 0
00:24:33.221 NGUID/EUI64 Never Reused: No
00:24:33.221 ANA group ID: 1
00:24:33.221 Namespace Write Protected: No
00:24:33.221 Number of LBA Formats: 1
00:24:33.221 Current LBA Format: LBA Format #00
00:24:33.221 LBA Format #00: Data Size: 512 Metadata Size: 0
00:24:33.221
00:24:33.221 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini
00:24:33.221 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:33.221 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync
00:24:33.221 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:33.221 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e
00:24:33.221 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:33.221 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:33.221 rmmod nvme_tcp
00:24:33.221 rmmod nvme_fabrics
00:24:33.221 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:33.221 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e
00:24:33.221 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0
00:24:33.221 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:24:33.221 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:33.221 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:33.221 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:33.221 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:33.221 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:33.221 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:33.221 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:33.221 18:49:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:35.752 18:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:35.752 18:49:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target
00:24:35.752 18:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:24:35.752 18:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0
00:24:35.752 18:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:24:35.752 18:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:24:35.752 18:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:24:35.752 18:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:24:35.752 18:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*)
00:24:35.752 18:49:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet
00:24:35.752 18:49:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:24:38.276 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:24:38.276 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:24:38.276 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:24:38.276 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:24:38.276 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:24:38.276 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:24:38.276 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:24:38.276 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:24:38.276 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:24:38.276 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:24:38.276 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:24:38.276 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:24:38.276 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:24:38.276 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:24:38.276 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:24:38.276 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:24:39.207 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:24:39.207
00:24:39.207 real 0m15.431s
00:24:39.207 user 0m3.933s
00:24:39.207 sys 0m7.900s
00:24:39.207 18:49:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable
00:24:39.207 18:49:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:24:39.207 ************************************
00:24:39.207 END TEST nvmf_identify_kernel_target
00:24:39.207 ************************************
00:24:39.207 18:49:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:24:39.207 18:49:55 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:24:39.207 18:49:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:24:39.207 18:49:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:39.207 18:49:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:39.207 ************************************
00:24:39.207 START TEST nvmf_auth_host
00:24:39.207 ************************************
00:24:39.207 18:49:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:24:39.207 * Looking for test storage...
00:24:39.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:39.465 18:49:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:39.465 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s
00:24:39.465 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:39.465 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:39.465 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:39.465 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:39.465 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:39.465 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:39.465 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:39.465 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:39.465 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:39.465 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:39.465 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:24:39.465 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:24:39.465 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:39.465 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:39.465 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:39.465 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:39.465 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:39.465 18:49:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:39.465 18:49:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:39.465 18:49:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512")
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=()
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=()
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable
00:24:39.466 18:49:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=()
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=()
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=()
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=()
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=()
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=()
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=()
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:24:44.730 Found 0000:86:00.0 (0x8086 - 0x159b)
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:24:44.730 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:24:44.731 Found 0000:86:00.1 (0x8086 - 0x159b)
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]]
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:24:44.731 Found net devices under 0000:86:00.0: cvl_0_0
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]]
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:24:44.731 Found net devices under 0000:86:00.1: cvl_0_1
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:24:44.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:44.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:24:44.731 00:24:44.731 --- 10.0.0.2 ping statistics --- 00:24:44.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.731 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:24:44.731 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:44.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:44.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:24:44.990 00:24:44.990 --- 10.0.0.1 ping statistics --- 00:24:44.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.990 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:24:44.990 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:44.990 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:24:44.990 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:44.990 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:44.990 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:44.990 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:44.990 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:44.990 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:44.990 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:44.990 18:50:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:44.990 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:44.990 18:50:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:44.990 18:50:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.990 18:50:01 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:44.990 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1219482 00:24:44.990 18:50:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1219482 00:24:44.990 18:50:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1219482 ']' 00:24:44.990 18:50:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.990 18:50:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:44.990 18:50:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.990 18:50:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:44.990 18:50:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.611 18:50:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:45.611 18:50:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:24:45.611 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:45.611 18:50:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:45.611 18:50:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:45.870 18:50:02 
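`waitforlisten` above blocks until the freshly launched `nvmf_tgt` process (pid 1219482 here) is alive and accepting connections on its RPC socket `/var/tmp/spdk.sock`, retrying up to `max_retries` times. A minimal Python sketch of that polling loop (the function name and return convention are illustrative, not SPDK's API):

```python
import os
import socket
import time

def wait_for_listen(pid, rpc_addr="/var/tmp/spdk.sock", max_retries=100, delay=0.1):
    """Poll until `pid` is alive and a UNIX socket at `rpc_addr` accepts
    connections, mirroring what autotest_common.sh's waitforlisten does.
    Returns True on success, False once the retries are exhausted."""
    for _ in range(max_retries):
        try:
            os.kill(pid, 0)        # signal 0: existence check; raises if pid died
        except OSError:
            return False
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(rpc_addr)
                return True        # socket is up and listening
        except OSError:
            time.sleep(delay)      # not listening yet; back off and retry
    return False
```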
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3e647756c04c99771bf1f5e06ff7f762 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Goi 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3e647756c04c99771bf1f5e06ff7f762 0 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3e647756c04c99771bf1f5e06ff7f762 0 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3e647756c04c99771bf1f5e06ff7f762 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Goi 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Goi 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Goi 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 
64 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=44a216c1ba8742a8e9d3fe1d4678ef49d17801abdedb60a6710e9bedbc7aa771 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.BEA 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 44a216c1ba8742a8e9d3fe1d4678ef49d17801abdedb60a6710e9bedbc7aa771 3 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 44a216c1ba8742a8e9d3fe1d4678ef49d17801abdedb60a6710e9bedbc7aa771 3 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=44a216c1ba8742a8e9d3fe1d4678ef49d17801abdedb60a6710e9bedbc7aa771 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.BEA 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.BEA 00:24:45.870 18:50:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.BEA 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c9094a94444b0d3bfd4adf9800521d239338475636743bfd 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.G1H 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c9094a94444b0d3bfd4adf9800521d239338475636743bfd 0 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c9094a94444b0d3bfd4adf9800521d239338475636743bfd 0 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c9094a94444b0d3bfd4adf9800521d239338475636743bfd 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.G1H 00:24:45.870 18:50:02 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.G1H 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.G1H 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b26dc7ae38af2a64f232d6e20df1ff150e8db9935f37f810 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.N2U 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b26dc7ae38af2a64f232d6e20df1ff150e8db9935f37f810 2 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b26dc7ae38af2a64f232d6e20df1ff150e8db9935f37f810 2 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b26dc7ae38af2a64f232d6e20df1ff150e8db9935f37f810 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:45.870 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:46.129 18:50:02 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.N2U 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.N2U 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.N2U 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=60944ce827e4e4827f06b905e01c4a27 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.59j 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 60944ce827e4e4827f06b905e01c4a27 1 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 60944ce827e4e4827f06b905e01c4a27 1 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=60944ce827e4e4827f06b905e01c4a27 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.59j 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.59j 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.59j 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0c4b52f34aa8f284dd5ba68b7070e58b 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ZLy 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0c4b52f34aa8f284dd5ba68b7070e58b 1 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0c4b52f34aa8f284dd5ba68b7070e58b 1 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0c4b52f34aa8f284dd5ba68b7070e58b 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:46.129 
18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ZLy 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ZLy 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.ZLy 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2b268c6cd940a021a0ab00662ed22876dcc4d01944ff2462 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.hMx 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2b268c6cd940a021a0ab00662ed22876dcc4d01944ff2462 2 00:24:46.129 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2b268c6cd940a021a0ab00662ed22876dcc4d01944ff2462 2 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=2b268c6cd940a021a0ab00662ed22876dcc4d01944ff2462 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.hMx 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.hMx 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.hMx 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3a9bc6eaa0fa3b6f6894eaa54a22100c 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.9YA 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3a9bc6eaa0fa3b6f6894eaa54a22100c 0 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3a9bc6eaa0fa3b6f6894eaa54a22100c 0 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:46.130 18:50:02 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3a9bc6eaa0fa3b6f6894eaa54a22100c 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.9YA 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.9YA 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.9YA 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fbb71a84b66206623cc0eff9183299ca025aa26de2b930164150d4ae0c9fa3f0 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.joR 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fbb71a84b66206623cc0eff9183299ca025aa26de2b930164150d4ae0c9fa3f0 3 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fbb71a84b66206623cc0eff9183299ca025aa26de2b930164150d4ae0c9fa3f0 3 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local 
prefix key digest 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fbb71a84b66206623cc0eff9183299ca025aa26de2b930164150d4ae0c9fa3f0 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:46.130 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:46.388 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.joR 00:24:46.388 18:50:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.joR 00:24:46.388 18:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.joR 00:24:46.388 18:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:46.388 18:50:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1219482 00:24:46.388 18:50:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1219482 ']' 00:24:46.388 18:50:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.388 18:50:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:46.388 18:50:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
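Each `gen_dhchap_key <digest> <len>` call above draws `len/2` random bytes with `xxd`, keeps the hex string itself as the secret, and then `format_dhchap_key`/`format_key` wraps it via an inline `python -` step whose body is not captured in the log. The sketch below reproduces the `DHHC-1:<2-digit digest id>:<base64 blob>:` framing visible in the resulting key files (e.g. `/tmp/spdk.key-null.Goi`); the CRC32 trailer appended little-endian before encoding is an assumption based on the NVMe-oF transport-secret convention:

```python
import base64
import zlib

def format_dhchap_key(key: str, digest: int) -> str:
    """Frame a hex secret string as a DHHC-1 transport secret:
    DHHC-1:<2-digit digest id>:<base64(secret bytes + CRC32 trailer)>:.
    CRC32 of the secret is appended little-endian (assumed convention)."""
    raw = key.encode()
    crc = zlib.crc32(raw).to_bytes(4, "little")
    return "DHHC-1:%02d:%s:" % (digest, base64.b64encode(raw + crc).decode())

def check_dhchap_key(formatted: str) -> str:
    """Parse a DHHC-1 secret back out and verify its CRC trailer."""
    prefix, _digest, b64, _ = formatted.split(":")
    assert prefix == "DHHC-1"
    blob = base64.b64decode(b64)
    raw, crc = blob[:-4], blob[-4:]
    assert zlib.crc32(raw).to_bytes(4, "little") == crc, "CRC mismatch"
    return raw.decode()
```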
00:24:46.388 18:50:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:46.388 18:50:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.388 18:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Goi 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.BEA ]] 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.BEA 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.G1H 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n 
/tmp/spdk.key-sha384.N2U ]] 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.N2U 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.59j 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.ZLy ]] 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZLy 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.hMx 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.389 
18:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.9YA ]] 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.9YA 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.389 18:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.joR 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:46.656 18:50:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:49.190 Waiting for block devices as requested 00:24:49.190 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:49.190 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:49.190 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:49.190 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:49.190 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:49.448 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:49.448 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:49.448 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:49.448 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:49.706 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:49.706 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:49.706 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:49.706 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:49.963 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:49.963 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:49.963 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:50.220 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:50.786 No valid GPT data, bailing 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- 
# echo ipv4 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:50.786 00:24:50.786 Discovery Log Number of Records 2, Generation counter 2 00:24:50.786 =====Discovery Log Entry 0====== 00:24:50.786 trtype: tcp 00:24:50.786 adrfam: ipv4 00:24:50.786 subtype: current discovery subsystem 00:24:50.786 treq: not specified, sq flow control disable supported 00:24:50.786 portid: 1 00:24:50.786 trsvcid: 4420 00:24:50.786 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:50.786 traddr: 10.0.0.1 00:24:50.786 eflags: none 00:24:50.786 sectype: none 00:24:50.786 =====Discovery Log Entry 1====== 00:24:50.786 trtype: tcp 00:24:50.786 adrfam: ipv4 00:24:50.786 subtype: nvme subsystem 00:24:50.786 treq: not specified, sq flow control disable supported 00:24:50.786 portid: 1 00:24:50.786 trsvcid: 4420 00:24:50.786 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:50.786 traddr: 10.0.0.1 00:24:50.786 eflags: none 00:24:50.786 sectype: none 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.786 18:50:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==: 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==: 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: ]] 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.786 18:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.044 nvme0n1 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn: 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn: 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: ]] 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.044 18:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.302 nvme0n1 00:24:51.302 18:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.302 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.302 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.302 18:50:07 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.302 18:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.302 18:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.302 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.302 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.302 18:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.302 18:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.302 18:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.302 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.302 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:51.302 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.302 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:51.302 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:51.302 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:51.302 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==: 00:24:51.303 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: 00:24:51.303 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:51.303 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:51.303 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==: 00:24:51.303 18:50:07 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: ]] 00:24:51.303 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: 00:24:51.303 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:51.303 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.303 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:51.303 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:51.303 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:51.303 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.303 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:51.303 18:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.303 18:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.303 18:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.303 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.303 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:51.303 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:51.303 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:51.303 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.303 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.303 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:51.303 18:50:07 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.303 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:51.303 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:51.303 18:50:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:51.303 18:50:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:51.303 18:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.303 18:50:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.561 nvme0n1 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.561 18:50:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK: 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK: 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: ]] 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:51.561 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:51.562 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:51.562 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.562 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.562 nvme0n1 00:24:51.562 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.562 18:50:08 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.562 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.562 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.562 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.819 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.819 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.819 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.819 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.819 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.819 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.819 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==: 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==: 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: ]] 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.820 nvme0n1 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=: 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=: 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.820 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.078 nvme0n1 00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.078 18:50:08 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn:
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=:
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn:
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: ]]
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=:
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:52.078 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.336 nvme0n1
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==:
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==:
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==:
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: ]]
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==:
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:52.336 18:50:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.595 nvme0n1
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK:
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY:
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK:
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: ]]
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY:
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:52.595 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.854 nvme0n1
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==:
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ:
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==:
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: ]]
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ:
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:52.854 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.113 nvme0n1
00:24:53.113 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:53.113 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:53.113 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:53.113 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:53.113 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.113 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:53.113 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:53.113 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:53.113 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:53.113 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.113 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:53.113 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:53.113 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:24:53.113 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=:
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=:
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:53.114 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.373 nvme0n1
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn:
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=:
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn:
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: ]]
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=:
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:53.373 18:50:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.639 nvme0n1
00:24:53.639 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==:
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==:
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==:
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: ]]
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==:
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:53.640 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.902 nvme0n1
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK:
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY:
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK:
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: ]]
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY:
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:53.902 18:50:10
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.902 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.160 nvme0n1 00:24:54.160 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.160 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.160 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.160 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.160 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.160 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.160 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.160 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.161 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.161 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.161 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.161 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.161 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:54.161 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:24:54.161 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:54.161 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:54.161 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:54.161 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==: 00:24:54.161 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: 00:24:54.161 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:54.161 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:54.161 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==: 00:24:54.161 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: ]] 00:24:54.161 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: 00:24:54.161 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:54.161 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.161 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:54.161 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:54.161 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:54.161 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.161 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:54.161 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
00:24:54.161 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.161 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.418 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.418 18:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:54.418 18:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:54.418 18:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:54.418 18:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.418 18:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.418 18:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:54.418 18:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.418 18:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:54.418 18:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:54.418 18:50:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:54.418 18:50:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:54.418 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.418 18:50:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.418 nvme0n1 00:24:54.418 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.418 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.418 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.418 18:50:11 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.418 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.418 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.676 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.676 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.676 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.676 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.676 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.676 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.676 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:54.676 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.676 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:54.676 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:54.676 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:54.676 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=: 00:24:54.676 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:54.676 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:54.676 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:54.676 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=: 00:24:54.676 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # 
[[ -z '' ]] 00:24:54.676 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:54.676 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.676 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:54.677 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:54.677 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:54.677 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.677 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:54.677 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.677 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.677 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.677 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.677 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:54.677 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:54.677 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:54.677 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.677 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.677 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:54.677 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.677 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:54.677 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 
00:24:54.677 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:54.677 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:54.677 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.677 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.934 nvme0n1 00:24:54.934 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn: 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn: 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: ]] 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:54.935 
18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.935 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.192 nvme0n1 00:24:55.192 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.192 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.192 18:50:11 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.192 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.192 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.192 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.192 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.192 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.192 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.192 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.192 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.192 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.192 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:55.192 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.192 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:55.192 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:55.192 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:55.192 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==: 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==: 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: ]] 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.193 18:50:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.757 nvme0n1 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK: 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK: 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: ]] 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.757 18:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.014 nvme0n1 
00:24:56.014 18:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:56.014 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:56.014 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:56.014 18:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:56.014 18:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:56.271 18:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:56.271 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:56.271 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:56.271 18:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:56.271 18:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:56.271 18:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:56.271 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:56.271 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:24:56.271 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:56.271 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:56.271 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:56.271 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:56.271 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==:
00:24:56.271 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ:
00:24:56.271 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:56.271 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:56.271 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==:
00:24:56.271 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: ]]
00:24:56.271 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ:
00:24:56.271 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:24:56.272 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:56.272 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:56.272 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:56.272 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:56.272 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:56.272 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:24:56.272 18:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:56.272 18:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:56.272 18:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:56.272 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:56.272 18:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:56.272 18:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:56.272 18:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:56.272 18:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:56.272 18:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:56.272 18:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:56.272 18:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:56.272 18:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:56.272 18:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:56.272 18:50:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:56.272 18:50:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:56.272 18:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:56.272 18:50:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:56.529 nvme0n1
00:24:56.529 18:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=:
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=:
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:56.530 18:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:57.094 nvme0n1
00:24:57.094 18:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:57.094 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:57.094 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:57.094 18:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:57.094 18:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:57.094 18:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:57.094 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:57.094 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:57.094 18:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:57.094 18:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:57.094 18:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:57.094 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:57.094 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn:
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=:
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn:
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: ]]
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=:
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:57.095 18:50:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:57.660 nvme0n1
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==:
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==:
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==:
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: ]]
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==:
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:57.660 18:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:58.226 nvme0n1
00:24:58.226 18:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:58.226 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:58.226 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:58.226 18:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:58.226 18:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:58.226 18:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:58.226 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:58.226 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:58.226 18:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:58.226 18:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:58.226 18:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:58.226 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:58.226 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:24:58.226 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:58.226 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:58.226 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:58.226 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:58.226 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK:
00:24:58.226 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY:
00:24:58.226 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:58.226 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:58.226 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK:
00:24:58.226 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: ]]
00:24:58.226 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY:
00:24:58.226 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:24:58.226 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:58.226 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:58.227 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:58.227 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:58.227 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:58.227 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:24:58.227 18:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:58.227 18:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:58.483 18:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:58.483 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:58.483 18:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:58.483 18:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:58.483 18:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:58.483 18:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:58.483 18:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:58.483 18:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:58.483 18:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:58.483 18:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:58.483 18:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:58.483 18:50:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:58.483 18:50:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:58.483 18:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:58.483 18:50:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:59.045 nvme0n1
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==:
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ:
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==:
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: ]]
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ:
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:59.045 18:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:59.607 nvme0n1
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=:
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=:
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:59.607 18:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:00.176 nvme0n1
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn:
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=:
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn:
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: ]]
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=:
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:00.176 18:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:00.455 nvme0n1
00:25:00.455 18:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:00.455 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:00.455 18:50:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:00.455 18:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:00.455 18:50:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:00.455 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==: 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==: 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: ]] 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.456 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.724 nvme0n1 00:25:00.724 18:50:17 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK: 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:00.724 18:50:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK: 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: ]] 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.724 18:50:17 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.724 nvme0n1 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.724 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.982 
18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==: 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==: 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: ]] 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # keyid=3 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.982 18:50:17 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.982 nvme0n1 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.982 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=: 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:01.240 18:50:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=: 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.240 nvme0n1 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn: 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn: 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: ]] 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:01.240 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:01.241 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.241 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:01.241 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.241 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.241 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.241 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.241 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:01.241 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:01.241 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:01.241 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.241 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.241 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:01.241 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.241 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:01.241 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:01.241 18:50:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:01.241 18:50:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:01.241 18:50:17 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.241 18:50:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.499 nvme0n1 00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==: 00:25:01.499 18:50:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==:
00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==:
00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: ]]
00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==:
00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:01.499 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:01.758 nvme0n1
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK:
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY:
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK:
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: ]]
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY:
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:01.758 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.016 nvme0n1
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==:
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ:
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==:
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: ]]
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ:
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:02.016 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.275 nvme0n1
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=:
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=:
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:02.275 18:50:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.579 nvme0n1
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn:
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=:
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn:
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: ]]
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=:
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:02.579 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.837 nvme0n1
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==:
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==:
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==:
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: ]]
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==:
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:02.837 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:03.094 nvme0n1
00:25:03.094 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:03.094 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:03.094 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:03.094 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:03.094 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:03.094 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:03.094 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:03.094 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:03.094 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:03.094 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK:
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY:
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK:
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: ]]
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY:
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:03.351 18:50:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:03.608 nvme0n1
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==:
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ:
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==:
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: ]]
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ:
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:03.608 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:03.867 nvme0n1
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=:
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=:
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:25:03.867 18:50:20
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.867 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.125 nvme0n1 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn: 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:04.125 18:50:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn: 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: ]] 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.125 18:50:20 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.125 18:50:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.691 nvme0n1 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.691 
18:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==: 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==: 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: ]] 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe6144 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.691 18:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.257 nvme0n1 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK: 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK: 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: ]] 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.257 18:50:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.517 nvme0n1 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.517 18:50:22 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==: 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==: 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: ]] 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.517 18:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.083 nvme0n1 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=: 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=: 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:06.083 18:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:06.084 18:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:06.084 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:06.084 18:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.084 18:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.341 nvme0n1 00:25:06.341 18:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.341 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.341 18:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.341 18:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.341 18:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.341 18:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.341 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.341 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.341 18:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 
-- # xtrace_disable 00:25:06.341 18:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.341 18:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.341 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:06.341 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.341 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:06.341 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.341 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:06.341 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:06.341 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:06.341 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn: 00:25:06.341 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: 00:25:06.341 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:06.342 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:06.342 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn: 00:25:06.342 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: ]] 00:25:06.342 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: 00:25:06.342 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:06.342 18:50:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.342 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:06.342 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:06.342 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:06.342 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.342 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:06.342 18:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.342 18:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.599 18:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.599 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.599 18:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:06.599 18:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:06.599 18:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:06.599 18:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.599 18:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.599 18:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:06.599 18:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.599 18:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:06.599 18:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:06.599 18:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:06.599 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:06.599 18:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.599 18:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.163 nvme0n1 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=1 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==: 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==: 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: ]] 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.164 18:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.730 nvme0n1 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK: 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK: 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: ]] 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.730 18:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.297 nvme0n1 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.297 18:50:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==: 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==: 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: ]] 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.297 18:50:24 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.297 18:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.864 nvme0n1 00:25:08.864 18:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.864 18:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.864 18:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.864 18:50:25 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.864 18:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.864 18:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.122 18:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.122 18:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.122 18:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.122 18:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.122 18:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.122 18:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.122 18:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:09.122 18:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.122 18:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:09.122 18:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:09.122 18:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:09.122 18:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=: 00:25:09.122 18:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:09.122 18:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:09.122 18:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:09.122 18:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=: 00:25:09.122 18:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:09.122 
18:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:09.122 18:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.122 18:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:09.122 18:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:09.122 18:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:09.122 18:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.122 18:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:09.122 18:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.122 18:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.122 18:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.122 18:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.122 18:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:09.123 18:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.123 18:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.123 18:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.123 18:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.123 18:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.123 18:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.123 18:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.123 18:50:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.123 18:50:25 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.123 18:50:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:09.123 18:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.123 18:50:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.689 nvme0n1 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:09.690 
18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn: 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn: 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: ]] 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.690 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.948 nvme0n1 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.948 18:50:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==: 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo 
ffdhe2048 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==: 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: ]] 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.948 nvme0n1 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.948 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.206 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.206 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.206 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.206 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:25:10.206 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.206 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.206 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:10.206 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.206 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:10.206 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:10.206 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:10.206 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK: 00:25:10.206 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: 00:25:10.206 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:10.206 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:10.206 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK: 00:25:10.206 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: ]] 00:25:10.206 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:10.207 
18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:10.207 nvme0n1 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==: 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: 00:25:10.207 
18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==: 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: ]] 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.207 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.466 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.466 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.466 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.466 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.466 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.466 18:50:26 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.466 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.466 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.466 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.466 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.466 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.466 18:50:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.466 18:50:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:10.466 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.466 18:50:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.466 nvme0n1 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=: 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=: 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.466 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.725 nvme0n1 00:25:10.725 18:50:27 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn: 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn: 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: ]] 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.725 18:50:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.725 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.983 nvme0n1 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.984 
18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==: 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==: 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: ]] 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe3072 1 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 
00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.984 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.242 nvme0n1 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK: 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK: 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: ]] 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.242 18:50:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.242 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.500 nvme0n1 00:25:11.500 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.500 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.500 18:50:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.500 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.500 18:50:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.500 18:50:28 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.500 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.500 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.500 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.500 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.500 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.500 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.500 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:11.500 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.500 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==: 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==: 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: ]] 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.501 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.759 nvme0n1 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.759 18:50:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=: 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=: 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.759 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.017 nvme0n1 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn: 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn: 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: ]] 00:25:12.017 18:50:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.017 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.276 nvme0n1 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:12.276 18:50:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==: 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==: 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: ]] 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.276 18:50:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.535 nvme0n1 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK: 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK: 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: ]] 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.535 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.794 nvme0n1 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==: 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==: 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: ]] 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.794 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.051 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.051 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.051 18:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.051 18:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.051 18:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.051 18:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.051 18:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.051 18:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.052 18:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.052 18:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.052 18:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.052 18:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.052 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:13.052 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.052 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.052 nvme0n1 00:25:13.052 18:50:29 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.052 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.052 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.052 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.052 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=: 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe4096 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=: 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.310 18:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.568 nvme0n1 00:25:13.568 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.568 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.568 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.568 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.568 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.568 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.568 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.568 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.568 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.568 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.568 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.568 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:13.568 
18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.568 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn: 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn: 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: ]] 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 
00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:13.569 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.569 18:50:30 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.827 nvme0n1 00:25:13.827 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.827 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.827 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.827 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.827 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.827 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.827 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.827 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.827 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.827 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==: 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==: 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: ]] 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.093 18:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.094 18:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.094 18:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.094 18:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.094 18:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.094 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:14.094 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.094 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.425 nvme0n1 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.425 18:50:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK: 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK: 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: ]] 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid 
ckey 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.425 18:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.688 nvme0n1 00:25:14.688 18:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.688 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.688 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.688 18:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.688 18:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.688 18:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:14.946 18:50:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==: 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==: 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: ]] 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.946 18:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.204 nvme0n1 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=: 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=: 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.204 18:50:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.204 18:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:15.205 18:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.205 18:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.205 18:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.205 18:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.205 18:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.205 18:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.205 18:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.205 18:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:15.205 18:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.205 18:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.771 nvme0n1 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=0 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn: 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2U2NDc3NTZjMDRjOTk3NzFiZjFmNWUwNmZmN2Y3NjI0IOWn: 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: ]] 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDRhMjE2YzFiYTg3NDJhOGU5ZDNmZTFkNDY3OGVmNDlkMTc4MDFhYmRlZGI2MGE2NzEwZTliZWRiYzdhYTc3MRaAcUY=: 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.771 18:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.338 nvme0n1 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==: 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==: 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: ]] 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.338 18:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.339 18:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.339 18:50:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.339 18:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.339 18:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.339 18:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.339 18:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:16.339 18:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.339 18:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.904 nvme0n1 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK: 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjA5NDRjZTgyN2U0ZTQ4MjdmMDZiOTA1ZTAxYzRhMjfgbSPK: 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: ]] 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGM0YjUyZjM0YWE4ZjI4NGRkNWJhNjhiNzA3MGU1OGJBr1KY: 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe8192 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.904 18:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.162 18:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.162 18:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.162 18:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.162 18:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.162 18:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.162 18:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.162 18:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.162 18:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.162 18:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.162 18:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.162 18:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.163 18:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:17.163 18:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.163 18:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.729 nvme0n1 00:25:17.729 18:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.729 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:17.729 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.729 18:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.729 18:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.729 18:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.729 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.729 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.729 18:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.729 18:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.729 18:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.729 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.729 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:17.729 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.729 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:17.729 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:17.729 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:17.729 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==: 00:25:17.730 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: 00:25:17.730 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:17.730 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:17.730 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MmIyNjhjNmNkOTQwYTAyMWEwYWIwMDY2MmVkMjI4NzZkY2M0ZDAxOTQ0ZmYyNDYypKeGeA==: 00:25:17.730 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: ]] 00:25:17.730 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2E5YmM2ZWFhMGZhM2I2ZjY4OTRlYWE1NGEyMjEwMGP+fXaZ: 00:25:17.730 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:17.730 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.730 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:17.730 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:17.730 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:17.730 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.730 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:17.730 18:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.730 18:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.730 18:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.730 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.730 18:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.730 18:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.730 18:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.730 18:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.730 18:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.730 18:50:34 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.730 18:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.730 18:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.730 18:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.730 18:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.730 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:17.730 18:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.730 18:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.296 nvme0n1 00:25:18.296 18:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.296 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.296 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.296 18:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.296 18:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.296 18:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.296 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.296 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.296 18:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.296 18:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.296 18:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.296 18:50:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.296 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:18.296 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.296 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:18.296 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:18.296 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:18.296 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=: 00:25:18.296 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:18.297 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:18.297 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:18.297 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJiNzFhODRiNjYyMDY2MjNjYzBlZmY5MTgzMjk5Y2EwMjVhYTI2ZGUyYjkzMDE2NDE1MGQ0YWUwYzlmYTNmMKt+rL4=: 00:25:18.297 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:18.297 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:18.297 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.297 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:18.297 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:18.297 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:18.297 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.297 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:18.297 18:50:34 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.297 18:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.297 18:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.297 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.297 18:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.297 18:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.297 18:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.297 18:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.297 18:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.297 18:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.297 18:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.297 18:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.297 18:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.297 18:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.297 18:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:18.297 18:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.297 18:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.864 nvme0n1 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==: 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzkwOTRhOTQ0NDRiMGQzYmZkNGFkZjk4MDA1MjFkMjM5MzM4NDc1NjM2NzQzYmZkiSIWRg==: 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 
-- # [[ -z DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: ]] 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI2ZGM3YWUzOGFmMmE2NGYyMzJkNmUyMGRmMWZmMTUwZThkYjk5MzVmMzdmODEwrUWQkA==: 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@648 -- # local es=0 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.864 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.123 request: 00:25:19.123 { 00:25:19.123 "name": "nvme0", 00:25:19.123 "trtype": "tcp", 00:25:19.123 "traddr": "10.0.0.1", 00:25:19.123 "adrfam": "ipv4", 00:25:19.123 "trsvcid": "4420", 00:25:19.123 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:19.123 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:19.123 "prchk_reftag": false, 00:25:19.123 "prchk_guard": false, 00:25:19.123 "hdgst": false, 00:25:19.123 "ddgst": false, 00:25:19.123 "method": "bdev_nvme_attach_controller", 00:25:19.123 "req_id": 1 00:25:19.123 } 00:25:19.123 Got JSON-RPC error response 00:25:19.123 response: 00:25:19.123 { 00:25:19.123 "code": -5, 00:25:19.123 "message": "Input/output error" 00:25:19.123 } 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 
00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.123 request: 00:25:19.123 { 00:25:19.123 "name": "nvme0", 00:25:19.123 "trtype": "tcp", 00:25:19.123 "traddr": "10.0.0.1", 00:25:19.123 "adrfam": "ipv4", 00:25:19.123 "trsvcid": "4420", 00:25:19.123 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:19.123 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:19.123 "prchk_reftag": false, 00:25:19.123 "prchk_guard": false, 00:25:19.123 "hdgst": false, 00:25:19.123 "ddgst": false, 00:25:19.123 "dhchap_key": "key2", 00:25:19.123 "method": "bdev_nvme_attach_controller", 00:25:19.123 "req_id": 1 00:25:19.123 } 00:25:19.123 Got JSON-RPC error response 00:25:19.123 response: 00:25:19.123 { 
00:25:19.123 "code": -5, 00:25:19.123 "message": "Input/output error" 00:25:19.123 } 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.123 
18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.123 request: 00:25:19.123 { 00:25:19.123 "name": "nvme0", 00:25:19.123 "trtype": "tcp", 00:25:19.123 "traddr": "10.0.0.1", 00:25:19.123 "adrfam": "ipv4", 00:25:19.123 "trsvcid": "4420", 00:25:19.123 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:19.123 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:19.123 
"prchk_reftag": false, 00:25:19.123 "prchk_guard": false, 00:25:19.123 "hdgst": false, 00:25:19.123 "ddgst": false, 00:25:19.123 "dhchap_key": "key1", 00:25:19.123 "dhchap_ctrlr_key": "ckey2", 00:25:19.123 "method": "bdev_nvme_attach_controller", 00:25:19.123 "req_id": 1 00:25:19.123 } 00:25:19.123 Got JSON-RPC error response 00:25:19.123 response: 00:25:19.123 { 00:25:19.123 "code": -5, 00:25:19.123 "message": "Input/output error" 00:25:19.123 } 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:19.123 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:19.123 rmmod nvme_tcp 00:25:19.382 rmmod nvme_fabrics 00:25:19.382 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:19.382 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:25:19.382 18:50:35 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@125 -- # return 0 00:25:19.382 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1219482 ']' 00:25:19.382 18:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1219482 00:25:19.382 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1219482 ']' 00:25:19.382 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1219482 00:25:19.382 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:25:19.382 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:19.382 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1219482 00:25:19.382 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:19.382 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:19.382 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1219482' 00:25:19.382 killing process with pid 1219482 00:25:19.382 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1219482 00:25:19.382 18:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1219482 00:25:19.382 18:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:19.382 18:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:19.382 18:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:19.382 18:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:19.382 18:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:19.382 18:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.382 18:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:25:19.382 18:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.918 18:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:21.918 18:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:21.918 18:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:21.918 18:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:21.918 18:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:21.918 18:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:25:21.918 18:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:21.918 18:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:21.918 18:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:21.918 18:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:21.918 18:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:21.918 18:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:21.918 18:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:23.820 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:23.820 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:23.820 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:23.820 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 
00:25:23.820 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:23.820 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:24.079 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:24.079 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:24.079 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:24.079 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:24.079 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:24.079 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:24.079 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:24.079 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:24.079 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:24.079 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:25.017 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:25.018 18:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Goi /tmp/spdk.key-null.G1H /tmp/spdk.key-sha256.59j /tmp/spdk.key-sha384.hMx /tmp/spdk.key-sha512.joR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:25.018 18:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:27.543 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:27.543 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:27.543 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:25:27.543 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:27.543 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:27.543 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:27.543 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:27.543 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:27.543 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:27.543 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:25:27.543 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:27.543 
0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:25:27.543 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:27.543 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:27.543 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:25:27.543 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:25:27.543 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:27.543 00:25:27.543 real 0m47.957s 00:25:27.543 user 0m42.623s 00:25:27.543 sys 0m11.069s 00:25:27.543 18:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:27.543 18:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.543 ************************************ 00:25:27.543 END TEST nvmf_auth_host 00:25:27.543 ************************************ 00:25:27.543 18:50:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:27.543 18:50:43 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:25:27.543 18:50:43 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:27.543 18:50:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:27.543 18:50:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:27.543 18:50:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:27.543 ************************************ 00:25:27.543 START TEST nvmf_digest 00:25:27.543 ************************************ 00:25:27.543 18:50:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:27.543 * Looking for test storage... 
00:25:27.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:27.543 18:50:43 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:27.543 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:27.543 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:27.543 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:27.543 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:27.543 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:27.543 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:27.543 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:27.543 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:27.543 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:25:27.544 18:50:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:32.807 18:50:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:32.807 18:50:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:25:32.807 18:50:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:32.807 18:50:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:25:32.807 18:50:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:32.807 18:50:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:32.807 18:50:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:32.807 18:50:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:25:32.807 18:50:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:32.807 18:50:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:25:32.807 18:50:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:25:32.807 18:50:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:25:32.807 18:50:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:25:32.807 18:50:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:25:32.807 18:50:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:25:32.807 18:50:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:32.807 18:50:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:32.807 18:50:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:32.807 18:50:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:32.807 18:50:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:32.807 18:50:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:32.807 18:50:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:32.807 18:50:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:32.807 18:50:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:32.807 18:50:49 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:32.807 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:32.807 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:32.807 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:32.807 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:32.807 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:32.807 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:32.807 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:32.807 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:32.807 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:32.807 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:32.807 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:32.807 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:32.807 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.807 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.807 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:32.807 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:32.808 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:32.808 Found net devices under 0000:86:00.0: cvl_0_0 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:32.808 Found net devices under 0000:86:00.1: cvl_0_1 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip 
netns add cvl_0_0_ns_spdk 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:32.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:32.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:25:32.808 00:25:32.808 --- 10.0.0.2 ping statistics --- 00:25:32.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.808 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:32.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
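The `Found net devices under 0000:86:00.0: cvl_0_0` lines above come from resolving each NIC PCI function to its netdev through sysfs: the interface name is simply the directory under `/sys/bus/pci/devices/<pci>/net/`. A sketch of that lookup; taking the sysfs root as a parameter is an assumption made here so it can be exercised against a scratch tree rather than a live `/sys`:

```shell
#!/usr/bin/env bash
# Sketch of the netdev discovery traced above: for a NIC's PCI
# address, list the interface names sysfs exposes under its
# devices/<pci>/net/ directory.
set -euo pipefail

pci_net_devs() {
    local sysfs=$1 pci=$2 dev
    for dev in "$sysfs/bus/pci/devices/$pci/net/"*/; do
        [ -d "$dev" ] || continue        # unmatched glob: no netdev for this function
        dev=${dev%/}                     # strip trailing slash left by the */ glob
        echo "Found net devices under $pci: ${dev##*/}"
    done
}
```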
00:25:32.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:25:32.808 00:25:32.808 --- 10.0.0.1 ping statistics --- 00:25:32.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.808 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:32.808 ************************************ 00:25:32.808 START TEST nvmf_digest_clean 00:25:32.808 ************************************ 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:25:32.808 18:50:49 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1232519 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1232519 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1232519 ']' 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:32.808 18:50:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:32.808 [2024-07-15 18:50:49.363599] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:25:32.808 [2024-07-15 18:50:49.363643] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:32.808 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.808 [2024-07-15 18:50:49.421583] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.808 [2024-07-15 18:50:49.500113] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:32.808 [2024-07-15 18:50:49.500147] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:32.808 [2024-07-15 18:50:49.500154] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:32.808 [2024-07-15 18:50:49.500163] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:32.808 [2024-07-15 18:50:49.500168] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
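`nvmf_tcp_init`, traced at nvmf/common.sh@229-268 above, splits the two NIC ports into a target network namespace and an initiator side on 10.0.0.0/24, then opens TCP port 4420 for NVMe/TCP. A condensed sketch of the same command sequence; the address flushes and ping checks are left out, and the interface/namespace names are taken from the log (`cvl_0_0` becomes the target, `cvl_0_1` the initiator). By default it prints the plan; pass `sudo` as `$1` to run it for real:

```shell
#!/usr/bin/env bash
# Sketch of the target/initiator namespace split traced above.
set -euo pipefail

setup_target_ns() {
    local run=${1:-echo} ns=cvl_0_0_ns_spdk
    "$run" ip netns add "$ns"
    "$run" ip link set cvl_0_0 netns "$ns"                      # target port moves into the netns
    "$run" ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default netns
    "$run" ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    "$run" ip link set cvl_0_1 up
    "$run" ip netns exec "$ns" ip link set cvl_0_0 up
    "$run" ip netns exec "$ns" ip link set lo up
    "$run" iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
}
```

This is why the target later starts as `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...`: the listener on 10.0.0.2:4420 lives inside the namespace, while bdevperf connects from the default namespace over 10.0.0.1.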
00:25:32.808 [2024-07-15 18:50:49.500203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.744 18:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:33.744 18:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:33.744 18:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:33.744 18:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:33.744 18:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:33.744 18:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.744 18:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:33.744 18:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:33.744 18:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:33.744 18:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.744 18:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:33.744 null0 00:25:33.744 [2024-07-15 18:50:50.287497] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.744 [2024-07-15 18:50:50.311675] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:33.744 18:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.744 18:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:33.744 18:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:33.744 
18:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:33.744 18:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:33.744 18:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:33.744 18:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:33.744 18:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:33.744 18:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1232576 00:25:33.744 18:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1232576 /var/tmp/bperf.sock 00:25:33.744 18:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:33.744 18:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1232576 ']' 00:25:33.744 18:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:33.744 18:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:33.744 18:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:33.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:33.744 18:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:33.744 18:50:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:33.744 [2024-07-15 18:50:50.364100] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:25:33.744 [2024-07-15 18:50:50.364138] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1232576 ] 00:25:33.744 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.744 [2024-07-15 18:50:50.418652] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.003 [2024-07-15 18:50:50.499480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.594 18:50:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:34.594 18:50:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:34.594 18:50:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:34.594 18:50:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:34.594 18:50:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:34.859 18:50:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:34.859 18:50:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:35.117 nvme0n1 00:25:35.118 18:50:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:35.118 18:50:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:25:35.118 Running I/O for 2 seconds... 00:25:37.648 00:25:37.648 Latency(us) 00:25:37.648 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:37.648 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:37.648 nvme0n1 : 2.00 26562.82 103.76 0.00 0.00 4813.73 1966.08 16184.54 00:25:37.648 =================================================================================================================== 00:25:37.648 Total : 26562.82 103.76 0.00 0.00 4813.73 1966.08 16184.54 00:25:37.648 0 00:25:37.648 18:50:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:37.648 18:50:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:37.648 18:50:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:37.648 18:50:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:37.648 | select(.opcode=="crc32c") 00:25:37.648 | "\(.module_name) \(.executed)"' 00:25:37.648 18:50:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:37.648 18:50:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:37.648 18:50:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:37.648 18:50:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:37.648 18:50:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:37.648 18:50:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1232576 00:25:37.648 18:50:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1232576 ']' 00:25:37.648 18:50:53 
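The `get_accel_stats` step above pipes the `accel_get_stats` RPC response through a jq filter to pull out which module executed the crc32c operations. A standalone sketch of that filter, run against a hypothetical response (the `executed` count and the `copy` entry are invented for illustration; the filter itself is the one the test uses):

```shell
# Hypothetical accel_get_stats JSON, piped through the test's jq filter
# to extract "<module_name> <executed>" for the crc32c opcode.
stats='{"operations":[
  {"opcode":"copy","module_name":"software","executed":10},
  {"opcode":"crc32c","module_name":"software","executed":12658}]}'
echo "$stats" | jq -rc '.operations[]
  | select(.opcode=="crc32c")
  | "\(.module_name) \(.executed)"'
# prints: software 12658
```

The test then `read`s this output into `acc_module acc_executed` and asserts the module matches the expected one (`software` here, since DSA scanning is disabled) and that the executed count is nonzero.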
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1232576 00:25:37.648 18:50:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:37.648 18:50:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:37.648 18:50:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1232576 00:25:37.648 18:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:37.648 18:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:37.648 18:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1232576' 00:25:37.648 killing process with pid 1232576 00:25:37.648 18:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1232576 00:25:37.648 Received shutdown signal, test time was about 2.000000 seconds 00:25:37.648 00:25:37.648 Latency(us) 00:25:37.648 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:37.648 =================================================================================================================== 00:25:37.648 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:37.648 18:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1232576 00:25:37.648 18:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:37.648 18:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:37.648 18:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:37.648 18:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:37.648 18:50:54 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:37.648 18:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:37.648 18:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:37.648 18:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1233254 00:25:37.648 18:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1233254 /var/tmp/bperf.sock 00:25:37.648 18:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:37.648 18:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1233254 ']' 00:25:37.648 18:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:37.648 18:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:37.648 18:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:37.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:37.648 18:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:37.648 18:50:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:37.648 [2024-07-15 18:50:54.233862] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:25:37.648 [2024-07-15 18:50:54.233912] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233254 ] 00:25:37.648 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:37.648 Zero copy mechanism will not be used. 00:25:37.648 EAL: No free 2048 kB hugepages reported on node 1 00:25:37.648 [2024-07-15 18:50:54.289341] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.906 [2024-07-15 18:50:54.357963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:38.472 18:50:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:38.472 18:50:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:38.472 18:50:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:38.472 18:50:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:38.472 18:50:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:38.731 18:50:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:38.731 18:50:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:38.989 nvme0n1 00:25:38.989 18:50:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:38.989 18:50:55 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:38.989 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:38.989 Zero copy mechanism will not be used. 00:25:38.989 Running I/O for 2 seconds... 00:25:40.890 00:25:40.890 Latency(us) 00:25:40.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:40.890 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:40.890 nvme0n1 : 2.00 5137.31 642.16 0.00 0.00 3111.81 918.93 5670.29 00:25:40.890 =================================================================================================================== 00:25:40.890 Total : 5137.31 642.16 0.00 0.00 3111.81 918.93 5670.29 00:25:40.890 0 00:25:41.148 18:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:41.148 18:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:41.148 18:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:41.148 18:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:41.148 | select(.opcode=="crc32c") 00:25:41.148 | "\(.module_name) \(.executed)"' 00:25:41.148 18:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:41.148 18:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:41.148 18:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:41.148 18:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:41.148 18:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:25:41.148 18:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1233254 00:25:41.148 18:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1233254 ']' 00:25:41.148 18:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1233254 00:25:41.148 18:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:41.148 18:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:41.148 18:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1233254 00:25:41.148 18:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:41.148 18:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:41.148 18:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1233254' 00:25:41.148 killing process with pid 1233254 00:25:41.148 18:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1233254 00:25:41.148 Received shutdown signal, test time was about 2.000000 seconds 00:25:41.148 00:25:41.148 Latency(us) 00:25:41.148 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:41.148 =================================================================================================================== 00:25:41.148 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:41.148 18:50:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1233254 00:25:41.407 18:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:41.407 18:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 
00:25:41.407 18:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:41.407 18:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:41.407 18:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:41.407 18:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:41.407 18:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:41.407 18:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1233944 00:25:41.407 18:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1233944 /var/tmp/bperf.sock 00:25:41.407 18:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:41.407 18:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1233944 ']' 00:25:41.407 18:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:41.407 18:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:41.407 18:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:41.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:41.407 18:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:41.407 18:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:41.407 [2024-07-15 18:50:58.058055] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:25:41.407 [2024-07-15 18:50:58.058103] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233944 ] 00:25:41.407 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.407 [2024-07-15 18:50:58.111663] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.665 [2024-07-15 18:50:58.184150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.231 18:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:42.231 18:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:42.231 18:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:42.231 18:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:42.231 18:50:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:42.489 18:50:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:42.489 18:50:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:42.747 nvme0n1 00:25:42.747 18:50:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:42.747 18:50:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:25:43.005 Running I/O for 2 seconds... 00:25:44.913 00:25:44.913 Latency(us) 00:25:44.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:44.913 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:44.913 nvme0n1 : 2.00 28277.39 110.46 0.00 0.00 4521.38 2179.78 8263.23 00:25:44.913 =================================================================================================================== 00:25:44.913 Total : 28277.39 110.46 0.00 0.00 4521.38 2179.78 8263.23 00:25:44.913 0 00:25:44.913 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:44.913 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:44.913 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:44.913 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:44.913 | select(.opcode=="crc32c") 00:25:44.913 | "\(.module_name) \(.executed)"' 00:25:44.913 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:45.172 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:45.172 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:45.172 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:45.172 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:45.172 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1233944 00:25:45.172 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1233944 ']' 00:25:45.172 18:51:01 
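The MiB/s column in the bdevperf latency tables is derived from IOPS and the configured IO size. A quick cross-check of the randwrite 4096-byte run reported above (28277.39 IOPS, 110.46 MiB/s):

```shell
# MiB/s = IOPS * IO size in bytes / 1 MiB (1048576 bytes).
# Using the figures from the randwrite/4096/qd128 table above.
awk 'BEGIN { printf "%.2f MiB/s\n", 28277.39 * 4096 / 1048576 }'
# prints: 110.46 MiB/s
```

The same relation holds for the 131072-byte runs, where MiB/s is simply IOPS / 8 (5137.31 / 8 = 642.16, 5846.09 / 8 = 730.76).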
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1233944 00:25:45.172 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:45.172 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:45.172 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1233944 00:25:45.172 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:45.172 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:45.172 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1233944' 00:25:45.172 killing process with pid 1233944 00:25:45.172 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1233944 00:25:45.172 Received shutdown signal, test time was about 2.000000 seconds 00:25:45.172 00:25:45.172 Latency(us) 00:25:45.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:45.172 =================================================================================================================== 00:25:45.172 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:45.172 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1233944 00:25:45.431 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:45.431 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:45.431 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:45.431 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:45.431 18:51:01 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:45.431 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:45.431 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:45.431 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1234675 00:25:45.431 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1234675 /var/tmp/bperf.sock 00:25:45.431 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:45.431 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1234675 ']' 00:25:45.431 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:45.431 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:45.431 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:45.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:45.431 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:45.431 18:51:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:45.431 [2024-07-15 18:51:02.022197] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:25:45.431 [2024-07-15 18:51:02.022269] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1234675 ] 00:25:45.431 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:45.431 Zero copy mechanism will not be used. 00:25:45.431 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.431 [2024-07-15 18:51:02.076891] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.688 [2024-07-15 18:51:02.156800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:46.254 18:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:46.255 18:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:46.255 18:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:46.255 18:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:46.255 18:51:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:46.512 18:51:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:46.512 18:51:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:46.770 nvme0n1 00:25:46.770 18:51:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:46.770 18:51:03 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:47.028 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:47.028 Zero copy mechanism will not be used. 00:25:47.028 Running I/O for 2 seconds... 00:25:48.929 00:25:48.929 Latency(us) 00:25:48.929 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.929 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:48.929 nvme0n1 : 2.00 5846.09 730.76 0.00 0.00 2732.65 1787.99 12993.22 00:25:48.929 =================================================================================================================== 00:25:48.929 Total : 5846.09 730.76 0.00 0.00 2732.65 1787.99 12993.22 00:25:48.929 0 00:25:48.929 18:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:48.929 18:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:48.929 18:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:48.929 18:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:48.929 | select(.opcode=="crc32c") 00:25:48.929 | "\(.module_name) \(.executed)"' 00:25:48.929 18:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:49.188 18:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:49.188 18:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:49.188 18:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:49.188 18:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:25:49.188 18:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1234675 00:25:49.188 18:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1234675 ']' 00:25:49.188 18:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1234675 00:25:49.188 18:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:49.188 18:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:49.188 18:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1234675 00:25:49.188 18:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:49.188 18:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:49.188 18:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1234675' 00:25:49.188 killing process with pid 1234675 00:25:49.188 18:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1234675 00:25:49.188 Received shutdown signal, test time was about 2.000000 seconds 00:25:49.188 00:25:49.188 Latency(us) 00:25:49.188 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:49.188 =================================================================================================================== 00:25:49.188 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:49.188 18:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1234675 00:25:49.447 18:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1232519 00:25:49.447 18:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1232519 ']' 00:25:49.447 
18:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1232519 00:25:49.447 18:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:49.447 18:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:49.447 18:51:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1232519 00:25:49.447 18:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:49.447 18:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:49.447 18:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1232519' 00:25:49.447 killing process with pid 1232519 00:25:49.447 18:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1232519 00:25:49.447 18:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1232519 00:25:49.707 00:25:49.707 real 0m16.880s 00:25:49.707 user 0m32.428s 00:25:49.707 sys 0m4.402s 00:25:49.707 18:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:49.707 18:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:49.707 ************************************ 00:25:49.707 END TEST nvmf_digest_clean 00:25:49.707 ************************************ 00:25:49.707 18:51:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:25:49.707 18:51:06 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:49.707 18:51:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:49.707 18:51:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:49.707 18:51:06 
nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:49.707 ************************************ 00:25:49.707 START TEST nvmf_digest_error 00:25:49.707 ************************************ 00:25:49.707 18:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:25:49.707 18:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:49.707 18:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:49.707 18:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:49.707 18:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:49.707 18:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1235400 00:25:49.707 18:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1235400 00:25:49.707 18:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:49.707 18:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1235400 ']' 00:25:49.707 18:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.707 18:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:49.707 18:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:49.707 18:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:49.707 18:51:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:49.707 [2024-07-15 18:51:06.317019] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:25:49.707 [2024-07-15 18:51:06.317058] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.707 EAL: No free 2048 kB hugepages reported on node 1 00:25:49.707 [2024-07-15 18:51:06.373356] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.966 [2024-07-15 18:51:06.447094] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.966 [2024-07-15 18:51:06.447136] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.966 [2024-07-15 18:51:06.447143] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:49.966 [2024-07-15 18:51:06.447149] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:49.966 [2024-07-15 18:51:06.447153] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:49.966 [2024-07-15 18:51:06.447193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.537 18:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:50.537 18:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:50.537 18:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:50.537 18:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:50.537 18:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:50.537 18:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.537 18:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:50.537 18:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.537 18:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:50.537 [2024-07-15 18:51:07.157439] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:50.537 18:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.537 18:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:50.537 18:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:50.537 18:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.537 18:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:50.796 null0 00:25:50.796 [2024-07-15 18:51:07.250334] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.796 
[2024-07-15 18:51:07.274510] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.796 18:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.796 18:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:50.796 18:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:50.796 18:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:50.796 18:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:50.796 18:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:50.796 18:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1235524 00:25:50.796 18:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1235524 /var/tmp/bperf.sock 00:25:50.796 18:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:50.796 18:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1235524 ']' 00:25:50.796 18:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:50.796 18:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:50.796 18:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:50.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:25:50.796 18:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:50.796 18:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:50.796 [2024-07-15 18:51:07.326178] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:25:50.796 [2024-07-15 18:51:07.326218] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1235524 ] 00:25:50.796 EAL: No free 2048 kB hugepages reported on node 1 00:25:50.796 [2024-07-15 18:51:07.379397] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.796 [2024-07-15 18:51:07.458696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:51.795 18:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:51.795 18:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:51.795 18:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:51.795 18:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:51.795 18:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:51.795 18:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.795 18:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:51.795 18:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:25:51.795 18:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:51.795 18:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:52.054 nvme0n1 00:25:52.054 18:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:52.054 18:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.054 18:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:52.054 18:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.054 18:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:52.054 18:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:52.313 Running I/O for 2 seconds... 
00:25:52.313 [2024-07-15 18:51:08.858503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.313 [2024-07-15 18:51:08.858536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.313 [2024-07-15 18:51:08.858546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.313 [2024-07-15 18:51:08.868736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.313 [2024-07-15 18:51:08.868761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.313 [2024-07-15 18:51:08.868770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.313 [2024-07-15 18:51:08.879470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.313 [2024-07-15 18:51:08.879491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.313 [2024-07-15 18:51:08.879499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.313 [2024-07-15 18:51:08.888392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.313 [2024-07-15 18:51:08.888415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.313 [2024-07-15 18:51:08.888423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.313 [2024-07-15 18:51:08.897998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.313 [2024-07-15 18:51:08.898021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.313 [2024-07-15 18:51:08.898034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.313 [2024-07-15 18:51:08.908322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.313 [2024-07-15 18:51:08.908345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.313 [2024-07-15 18:51:08.908354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.313 [2024-07-15 18:51:08.918206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.313 [2024-07-15 18:51:08.918234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.313 [2024-07-15 18:51:08.918243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.313 [2024-07-15 18:51:08.927871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.313 [2024-07-15 18:51:08.927892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.313 [2024-07-15 18:51:08.927901] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.313 [2024-07-15 18:51:08.936407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.313 [2024-07-15 18:51:08.936428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.313 [2024-07-15 18:51:08.936436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.313 [2024-07-15 18:51:08.946094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.313 [2024-07-15 18:51:08.946116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.313 [2024-07-15 18:51:08.946124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.313 [2024-07-15 18:51:08.955354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.313 [2024-07-15 18:51:08.955374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.313 [2024-07-15 18:51:08.955382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.313 [2024-07-15 18:51:08.965696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.313 [2024-07-15 18:51:08.965717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:52.313 [2024-07-15 18:51:08.965725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.313 [2024-07-15 18:51:08.975402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.313 [2024-07-15 18:51:08.975422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.313 [2024-07-15 18:51:08.975430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.313 [2024-07-15 18:51:08.983747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.313 [2024-07-15 18:51:08.983770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.313 [2024-07-15 18:51:08.983778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.313 [2024-07-15 18:51:08.993804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.313 [2024-07-15 18:51:08.993825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.314 [2024-07-15 18:51:08.993832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.314 [2024-07-15 18:51:09.002165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.314 [2024-07-15 18:51:09.002184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 
nsid:1 lba:13998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.314 [2024-07-15 18:51:09.002192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.314 [2024-07-15 18:51:09.013469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.314 [2024-07-15 18:51:09.013489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.314 [2024-07-15 18:51:09.013497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.584 [2024-07-15 18:51:09.023751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.584 [2024-07-15 18:51:09.023772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-07-15 18:51:09.023780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.584 [2024-07-15 18:51:09.034302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.584 [2024-07-15 18:51:09.034323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-07-15 18:51:09.034331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.584 [2024-07-15 18:51:09.043353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.584 [2024-07-15 18:51:09.043374] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-07-15 18:51:09.043382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.584 [2024-07-15 18:51:09.054046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.584 [2024-07-15 18:51:09.054067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-07-15 18:51:09.054075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.584 [2024-07-15 18:51:09.063073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.584 [2024-07-15 18:51:09.063096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-07-15 18:51:09.063104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.584 [2024-07-15 18:51:09.073820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.584 [2024-07-15 18:51:09.073841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-07-15 18:51:09.073851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.584 [2024-07-15 18:51:09.084003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xa38f20) 00:25:52.584 [2024-07-15 18:51:09.084025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-07-15 18:51:09.084033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.584 [2024-07-15 18:51:09.092560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.584 [2024-07-15 18:51:09.092581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-07-15 18:51:09.092589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.584 [2024-07-15 18:51:09.103036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.584 [2024-07-15 18:51:09.103057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-07-15 18:51:09.103065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.584 [2024-07-15 18:51:09.113147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.584 [2024-07-15 18:51:09.113168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-07-15 18:51:09.113177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.584 [2024-07-15 18:51:09.121135] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.584 [2024-07-15 18:51:09.121156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-07-15 18:51:09.121164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.584 [2024-07-15 18:51:09.131454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.584 [2024-07-15 18:51:09.131473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-07-15 18:51:09.131482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.584 [2024-07-15 18:51:09.141712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.584 [2024-07-15 18:51:09.141733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-07-15 18:51:09.141741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.584 [2024-07-15 18:51:09.151578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.584 [2024-07-15 18:51:09.151599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-07-15 18:51:09.151611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:52.584 [2024-07-15 18:51:09.160001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.584 [2024-07-15 18:51:09.160022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-07-15 18:51:09.160031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.584 [2024-07-15 18:51:09.170885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.584 [2024-07-15 18:51:09.170905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-07-15 18:51:09.170914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.584 [2024-07-15 18:51:09.181604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.584 [2024-07-15 18:51:09.181625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-07-15 18:51:09.181634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.585 [2024-07-15 18:51:09.190534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.585 [2024-07-15 18:51:09.190554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-07-15 18:51:09.190563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.585 [2024-07-15 18:51:09.202432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.585 [2024-07-15 18:51:09.202453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-07-15 18:51:09.202461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.585 [2024-07-15 18:51:09.213878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.585 [2024-07-15 18:51:09.213898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-07-15 18:51:09.213906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.585 [2024-07-15 18:51:09.222197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.585 [2024-07-15 18:51:09.222216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-07-15 18:51:09.222231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.585 [2024-07-15 18:51:09.233337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.585 [2024-07-15 18:51:09.233357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-07-15 18:51:09.233364] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.585 [2024-07-15 18:51:09.242533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.585 [2024-07-15 18:51:09.242554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-07-15 18:51:09.242562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.585 [2024-07-15 18:51:09.251831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.585 [2024-07-15 18:51:09.251851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-07-15 18:51:09.251859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.585 [2024-07-15 18:51:09.261301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.585 [2024-07-15 18:51:09.261323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-07-15 18:51:09.261330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.585 [2024-07-15 18:51:09.270737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:52.585 [2024-07-15 18:51:09.270758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6871 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:25:52.585 [2024-07-15 18:51:09.270766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:52.585 [2024-07-15 18:51:09.280615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:52.585 [2024-07-15 18:51:09.280636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.585 [2024-07-15 18:51:09.280645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:52.845 [2024-07-15 18:51:09.291223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:52.845 [2024-07-15 18:51:09.291250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.845 [2024-07-15 18:51:09.291258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:52.845 [2024-07-15 18:51:09.299196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:52.845 [2024-07-15 18:51:09.299217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.845 [2024-07-15 18:51:09.299230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:52.845 [2024-07-15 18:51:09.310253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:52.845 [2024-07-15 18:51:09.310273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.845 [2024-07-15 18:51:09.310281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:52.845 [2024-07-15 18:51:09.319997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:52.845 [2024-07-15 18:51:09.320017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.845 [2024-07-15 18:51:09.320031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:52.845 [2024-07-15 18:51:09.329341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:52.845 [2024-07-15 18:51:09.329360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.845 [2024-07-15 18:51:09.329368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:52.845 [2024-07-15 18:51:09.338849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:52.846 [2024-07-15 18:51:09.338869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.846 [2024-07-15 18:51:09.338877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:52.846 [2024-07-15 18:51:09.348952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:52.846 [2024-07-15 18:51:09.348972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.846 [2024-07-15 18:51:09.348980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:52.846 [2024-07-15 18:51:09.358586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:52.846 [2024-07-15 18:51:09.358605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.846 [2024-07-15 18:51:09.358613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:52.846 [2024-07-15 18:51:09.367034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:52.846 [2024-07-15 18:51:09.367054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.846 [2024-07-15 18:51:09.367061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:52.846 [2024-07-15 18:51:09.377727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:52.846 [2024-07-15 18:51:09.377746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.846 [2024-07-15 18:51:09.377754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:52.846 [2024-07-15 18:51:09.386976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:52.846 [2024-07-15 18:51:09.386996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.846 [2024-07-15 18:51:09.387004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:52.846 [2024-07-15 18:51:09.397889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:52.846 [2024-07-15 18:51:09.397909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.846 [2024-07-15 18:51:09.397917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:52.846 [2024-07-15 18:51:09.407470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:52.846 [2024-07-15 18:51:09.407494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.846 [2024-07-15 18:51:09.407503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:52.846 [2024-07-15 18:51:09.416827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:52.846 [2024-07-15 18:51:09.416847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.846 [2024-07-15 18:51:09.416855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:52.846 [2024-07-15 18:51:09.426377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:52.846 [2024-07-15 18:51:09.426397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.846 [2024-07-15 18:51:09.426405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:52.846 [2024-07-15 18:51:09.436754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:52.846 [2024-07-15 18:51:09.436775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.846 [2024-07-15 18:51:09.436784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:52.846 [2024-07-15 18:51:09.447716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:52.846 [2024-07-15 18:51:09.447738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.846 [2024-07-15 18:51:09.447746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:52.846 [2024-07-15 18:51:09.455580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:52.846 [2024-07-15 18:51:09.455600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.846 [2024-07-15 18:51:09.455608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:52.846 [2024-07-15 18:51:09.466598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:52.846 [2024-07-15 18:51:09.466618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.846 [2024-07-15 18:51:09.466627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:52.846 [2024-07-15 18:51:09.477269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:52.846 [2024-07-15 18:51:09.477290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.846 [2024-07-15 18:51:09.477299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:52.846 [2024-07-15 18:51:09.486439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:52.846 [2024-07-15 18:51:09.486460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:50 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.846 [2024-07-15 18:51:09.486468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:52.846 [2024-07-15 18:51:09.496913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:52.846 [2024-07-15 18:51:09.496933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.846 [2024-07-15 18:51:09.496941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:52.846 [2024-07-15 18:51:09.508569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:52.846 [2024-07-15 18:51:09.508590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.846 [2024-07-15 18:51:09.508598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:52.846 [2024-07-15 18:51:09.517105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:52.846 [2024-07-15 18:51:09.517125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.846 [2024-07-15 18:51:09.517133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:52.846 [2024-07-15 18:51:09.527055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:52.846 [2024-07-15 18:51:09.527076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.846 [2024-07-15 18:51:09.527084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:52.846 [2024-07-15 18:51:09.536990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:52.846 [2024-07-15 18:51:09.537010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.846 [2024-07-15 18:51:09.537018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:52.846 [2024-07-15 18:51:09.546313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:52.846 [2024-07-15 18:51:09.546333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.846 [2024-07-15 18:51:09.546341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.106 [2024-07-15 18:51:09.555982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.106 [2024-07-15 18:51:09.556002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.106 [2024-07-15 18:51:09.556010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.106 [2024-07-15 18:51:09.565021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.106 [2024-07-15 18:51:09.565041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.106 [2024-07-15 18:51:09.565048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.106 [2024-07-15 18:51:09.574676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.106 [2024-07-15 18:51:09.574696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.106 [2024-07-15 18:51:09.574707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.106 [2024-07-15 18:51:09.583994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.106 [2024-07-15 18:51:09.584015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.106 [2024-07-15 18:51:09.584023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.106 [2024-07-15 18:51:09.593587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.106 [2024-07-15 18:51:09.593607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.106 [2024-07-15 18:51:09.593615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.106 [2024-07-15 18:51:09.602400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.106 [2024-07-15 18:51:09.602419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.106 [2024-07-15 18:51:09.602427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.106 [2024-07-15 18:51:09.613147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.106 [2024-07-15 18:51:09.613167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:34 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.107 [2024-07-15 18:51:09.613175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.107 [2024-07-15 18:51:09.622412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.107 [2024-07-15 18:51:09.622432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.107 [2024-07-15 18:51:09.622440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.107 [2024-07-15 18:51:09.633392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.107 [2024-07-15 18:51:09.633412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.107 [2024-07-15 18:51:09.633420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.107 [2024-07-15 18:51:09.642025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.107 [2024-07-15 18:51:09.642045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.107 [2024-07-15 18:51:09.642053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.107 [2024-07-15 18:51:09.652992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.107 [2024-07-15 18:51:09.653012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.107 [2024-07-15 18:51:09.653020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.107 [2024-07-15 18:51:09.662780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.107 [2024-07-15 18:51:09.662803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.107 [2024-07-15 18:51:09.662811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.107 [2024-07-15 18:51:09.672208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.107 [2024-07-15 18:51:09.672233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.107 [2024-07-15 18:51:09.672242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.107 [2024-07-15 18:51:09.682281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.107 [2024-07-15 18:51:09.682301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.107 [2024-07-15 18:51:09.682309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.107 [2024-07-15 18:51:09.690549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.107 [2024-07-15 18:51:09.690569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.107 [2024-07-15 18:51:09.690578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.107 [2024-07-15 18:51:09.701384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.107 [2024-07-15 18:51:09.701405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.107 [2024-07-15 18:51:09.701413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.107 [2024-07-15 18:51:09.711342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.107 [2024-07-15 18:51:09.711361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.107 [2024-07-15 18:51:09.711369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.107 [2024-07-15 18:51:09.719380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.107 [2024-07-15 18:51:09.719399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.107 [2024-07-15 18:51:09.719407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.107 [2024-07-15 18:51:09.731522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.107 [2024-07-15 18:51:09.731543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.107 [2024-07-15 18:51:09.731551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.107 [2024-07-15 18:51:09.739594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.107 [2024-07-15 18:51:09.739615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.107 [2024-07-15 18:51:09.739623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.107 [2024-07-15 18:51:09.750775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.107 [2024-07-15 18:51:09.750795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.107 [2024-07-15 18:51:09.750803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.107 [2024-07-15 18:51:09.760447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.107 [2024-07-15 18:51:09.760468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.107 [2024-07-15 18:51:09.760476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.107 [2024-07-15 18:51:09.769494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.107 [2024-07-15 18:51:09.769514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.107 [2024-07-15 18:51:09.769521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.107 [2024-07-15 18:51:09.778268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.107 [2024-07-15 18:51:09.778288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.107 [2024-07-15 18:51:09.778296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.107 [2024-07-15 18:51:09.788653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.107 [2024-07-15 18:51:09.788673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.107 [2024-07-15 18:51:09.788680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.107 [2024-07-15 18:51:09.798102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.107 [2024-07-15 18:51:09.798122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.107 [2024-07-15 18:51:09.798130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.107 [2024-07-15 18:51:09.807629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.107 [2024-07-15 18:51:09.807649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.107 [2024-07-15 18:51:09.807657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.367 [2024-07-15 18:51:09.817380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.367 [2024-07-15 18:51:09.817400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.367 [2024-07-15 18:51:09.817408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.367 [2024-07-15 18:51:09.825914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.367 [2024-07-15 18:51:09.825933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.367 [2024-07-15 18:51:09.825946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.367 [2024-07-15 18:51:09.836616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.367 [2024-07-15 18:51:09.836637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.367 [2024-07-15 18:51:09.836644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.367 [2024-07-15 18:51:09.845624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.367 [2024-07-15 18:51:09.845644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.367 [2024-07-15 18:51:09.845652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.367 [2024-07-15 18:51:09.854680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.367 [2024-07-15 18:51:09.854700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.367 [2024-07-15 18:51:09.854708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.367 [2024-07-15 18:51:09.863507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.367 [2024-07-15 18:51:09.863527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.367 [2024-07-15 18:51:09.863535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.367 [2024-07-15 18:51:09.873827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.367 [2024-07-15 18:51:09.873847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.367 [2024-07-15 18:51:09.873855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.367 [2024-07-15 18:51:09.884092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.367 [2024-07-15 18:51:09.884111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.367 [2024-07-15 18:51:09.884120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.367 [2024-07-15 18:51:09.893424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.367 [2024-07-15 18:51:09.893442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.367 [2024-07-15 18:51:09.893451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.367 [2024-07-15 18:51:09.903921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.367 [2024-07-15 18:51:09.903941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.367 [2024-07-15 18:51:09.903949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.367 [2024-07-15 18:51:09.913925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.367 [2024-07-15 18:51:09.913944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.367 [2024-07-15 18:51:09.913952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.367 [2024-07-15 18:51:09.922521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.367 [2024-07-15 18:51:09.922540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.367 [2024-07-15 18:51:09.922548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.367 [2024-07-15 18:51:09.932221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.367 [2024-07-15 18:51:09.932246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.367 [2024-07-15 18:51:09.932254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.367 [2024-07-15 18:51:09.941742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.367 [2024-07-15 18:51:09.941762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.367 [2024-07-15 18:51:09.941770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.367 [2024-07-15 18:51:09.951029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.367 [2024-07-15 18:51:09.951050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.367 [2024-07-15 18:51:09.951058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.367 [2024-07-15 18:51:09.960844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.367 [2024-07-15 18:51:09.960863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.367 [2024-07-15 18:51:09.960871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.367 [2024-07-15 18:51:09.970233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.367 [2024-07-15 18:51:09.970253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.367 [2024-07-15 18:51:09.970261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.367 [2024-07-15 18:51:09.978715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.367 [2024-07-15 18:51:09.978736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.367 [2024-07-15 18:51:09.978744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.367 [2024-07-15 18:51:09.988697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.367 [2024-07-15 18:51:09.988717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.367 [2024-07-15 18:51:09.988727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.367 [2024-07-15 18:51:09.998058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.367 [2024-07-15 18:51:09.998078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.368 [2024-07-15 18:51:09.998086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.368 [2024-07-15 18:51:10.008083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.368 [2024-07-15 18:51:10.008104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.368 [2024-07-15 18:51:10.008113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.368 [2024-07-15 18:51:10.016954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.368 [2024-07-15 18:51:10.016975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.368 [2024-07-15 18:51:10.016983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.368 [2024-07-15 18:51:10.028110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.368 [2024-07-15 18:51:10.028130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.368 [2024-07-15 18:51:10.028138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.368 [2024-07-15 18:51:10.039552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.368 [2024-07-15 18:51:10.039573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.368 [2024-07-15 18:51:10.039592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.368 [2024-07-15 18:51:10.051092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.368 [2024-07-15 18:51:10.051112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.368 [2024-07-15 18:51:10.051120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:53.368 [2024-07-15 18:51:10.063179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20)
00:25:53.368 [2024-07-15 18:51:10.063200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.368 [2024-07-15 18:51:10.063208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001
p:0 m:0 dnr:0 00:25:53.628 [2024-07-15 18:51:10.074151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.628 [2024-07-15 18:51:10.074172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.628 [2024-07-15 18:51:10.074180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.628 [2024-07-15 18:51:10.085191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.628 [2024-07-15 18:51:10.085214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.628 [2024-07-15 18:51:10.085222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.628 [2024-07-15 18:51:10.098655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.628 [2024-07-15 18:51:10.098676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.628 [2024-07-15 18:51:10.098685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.628 [2024-07-15 18:51:10.108743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.628 [2024-07-15 18:51:10.108765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.628 [2024-07-15 18:51:10.108773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.628 [2024-07-15 18:51:10.120436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.628 [2024-07-15 18:51:10.120456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.628 [2024-07-15 18:51:10.120465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.628 [2024-07-15 18:51:10.128846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.628 [2024-07-15 18:51:10.128867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.628 [2024-07-15 18:51:10.128876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.628 [2024-07-15 18:51:10.139423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.628 [2024-07-15 18:51:10.139444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.628 [2024-07-15 18:51:10.139453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.628 [2024-07-15 18:51:10.148725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.628 [2024-07-15 18:51:10.148746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.628 [2024-07-15 18:51:10.148754] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.628 [2024-07-15 18:51:10.160425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.628 [2024-07-15 18:51:10.160444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.628 [2024-07-15 18:51:10.160452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.628 [2024-07-15 18:51:10.169352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.628 [2024-07-15 18:51:10.169373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.628 [2024-07-15 18:51:10.169380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.628 [2024-07-15 18:51:10.180117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.628 [2024-07-15 18:51:10.180136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.628 [2024-07-15 18:51:10.180144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.628 [2024-07-15 18:51:10.188745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.628 [2024-07-15 18:51:10.188764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:53.628 [2024-07-15 18:51:10.188772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.628 [2024-07-15 18:51:10.198163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.628 [2024-07-15 18:51:10.198183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.628 [2024-07-15 18:51:10.198191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.628 [2024-07-15 18:51:10.207956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.628 [2024-07-15 18:51:10.207975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.628 [2024-07-15 18:51:10.207984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.628 [2024-07-15 18:51:10.217020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.628 [2024-07-15 18:51:10.217040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.628 [2024-07-15 18:51:10.217048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.628 [2024-07-15 18:51:10.226924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.628 [2024-07-15 18:51:10.226944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 
nsid:1 lba:3727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.628 [2024-07-15 18:51:10.226952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.628 [2024-07-15 18:51:10.236896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.628 [2024-07-15 18:51:10.236915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.628 [2024-07-15 18:51:10.236923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.628 [2024-07-15 18:51:10.245800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.628 [2024-07-15 18:51:10.245821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.628 [2024-07-15 18:51:10.245829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.628 [2024-07-15 18:51:10.255302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.628 [2024-07-15 18:51:10.255324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.628 [2024-07-15 18:51:10.255335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.628 [2024-07-15 18:51:10.265387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.628 [2024-07-15 18:51:10.265408] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.628 [2024-07-15 18:51:10.265416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.628 [2024-07-15 18:51:10.275250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.628 [2024-07-15 18:51:10.275270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.628 [2024-07-15 18:51:10.275278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.628 [2024-07-15 18:51:10.283783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.628 [2024-07-15 18:51:10.283803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.628 [2024-07-15 18:51:10.283811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.628 [2024-07-15 18:51:10.294574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.628 [2024-07-15 18:51:10.294595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.628 [2024-07-15 18:51:10.294603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.628 [2024-07-15 18:51:10.303045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 
00:25:53.628 [2024-07-15 18:51:10.303067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.628 [2024-07-15 18:51:10.303074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.628 [2024-07-15 18:51:10.313520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.628 [2024-07-15 18:51:10.313541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.628 [2024-07-15 18:51:10.313550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.628 [2024-07-15 18:51:10.323176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.628 [2024-07-15 18:51:10.323197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.628 [2024-07-15 18:51:10.323205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.629 [2024-07-15 18:51:10.332601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.629 [2024-07-15 18:51:10.332623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.629 [2024-07-15 18:51:10.332631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.888 [2024-07-15 18:51:10.342068] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.888 [2024-07-15 18:51:10.342094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.888 [2024-07-15 18:51:10.342102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.888 [2024-07-15 18:51:10.351798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.888 [2024-07-15 18:51:10.351819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.889 [2024-07-15 18:51:10.351827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.889 [2024-07-15 18:51:10.361312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.889 [2024-07-15 18:51:10.361333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.889 [2024-07-15 18:51:10.361341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.889 [2024-07-15 18:51:10.371098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.889 [2024-07-15 18:51:10.371118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.889 [2024-07-15 18:51:10.371126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:53.889 [2024-07-15 18:51:10.380297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.889 [2024-07-15 18:51:10.380317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.889 [2024-07-15 18:51:10.380325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.889 [2024-07-15 18:51:10.389618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.889 [2024-07-15 18:51:10.389639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.889 [2024-07-15 18:51:10.389647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.889 [2024-07-15 18:51:10.400583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.889 [2024-07-15 18:51:10.400604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.889 [2024-07-15 18:51:10.400612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.889 [2024-07-15 18:51:10.409742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.889 [2024-07-15 18:51:10.409762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.889 [2024-07-15 18:51:10.409769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.889 [2024-07-15 18:51:10.418630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.889 [2024-07-15 18:51:10.418650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.889 [2024-07-15 18:51:10.418657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.889 [2024-07-15 18:51:10.429271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.889 [2024-07-15 18:51:10.429291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.889 [2024-07-15 18:51:10.429299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.889 [2024-07-15 18:51:10.438397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.889 [2024-07-15 18:51:10.438417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.889 [2024-07-15 18:51:10.438425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.889 [2024-07-15 18:51:10.448026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.889 [2024-07-15 18:51:10.448047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.889 [2024-07-15 18:51:10.448055] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.889 [2024-07-15 18:51:10.456727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.889 [2024-07-15 18:51:10.456747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.889 [2024-07-15 18:51:10.456754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.889 [2024-07-15 18:51:10.466483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.889 [2024-07-15 18:51:10.466503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.889 [2024-07-15 18:51:10.466511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.889 [2024-07-15 18:51:10.477574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.889 [2024-07-15 18:51:10.477594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.889 [2024-07-15 18:51:10.477602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.889 [2024-07-15 18:51:10.486111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.889 [2024-07-15 18:51:10.486131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23263 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:53.889 [2024-07-15 18:51:10.486139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.889 [2024-07-15 18:51:10.495858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.889 [2024-07-15 18:51:10.495878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.889 [2024-07-15 18:51:10.495886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.889 [2024-07-15 18:51:10.505493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.889 [2024-07-15 18:51:10.505513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.889 [2024-07-15 18:51:10.505524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.889 [2024-07-15 18:51:10.515439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.889 [2024-07-15 18:51:10.515459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.889 [2024-07-15 18:51:10.515467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.889 [2024-07-15 18:51:10.524669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.889 [2024-07-15 18:51:10.524689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:93 nsid:1 lba:22650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.889 [2024-07-15 18:51:10.524697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.889 [2024-07-15 18:51:10.532654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.889 [2024-07-15 18:51:10.532674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.889 [2024-07-15 18:51:10.532682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.889 [2024-07-15 18:51:10.542480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.889 [2024-07-15 18:51:10.542502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.889 [2024-07-15 18:51:10.542511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.889 [2024-07-15 18:51:10.552840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.889 [2024-07-15 18:51:10.552861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.889 [2024-07-15 18:51:10.552869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.889 [2024-07-15 18:51:10.562219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.889 [2024-07-15 18:51:10.562245] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.889 [2024-07-15 18:51:10.562253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.889 [2024-07-15 18:51:10.570610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.889 [2024-07-15 18:51:10.570631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.889 [2024-07-15 18:51:10.570639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.889 [2024-07-15 18:51:10.581061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.889 [2024-07-15 18:51:10.581081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.889 [2024-07-15 18:51:10.581089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.889 [2024-07-15 18:51:10.592172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:53.889 [2024-07-15 18:51:10.592194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.889 [2024-07-15 18:51:10.592202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.149 [2024-07-15 18:51:10.603294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xa38f20) 00:25:54.149 [2024-07-15 18:51:10.603315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.149 [2024-07-15 18:51:10.603323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.149 [2024-07-15 18:51:10.611905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:54.149 [2024-07-15 18:51:10.611925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.149 [2024-07-15 18:51:10.611933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.149 [2024-07-15 18:51:10.622353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:54.149 [2024-07-15 18:51:10.622374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.149 [2024-07-15 18:51:10.622382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.149 [2024-07-15 18:51:10.632784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:54.149 [2024-07-15 18:51:10.632805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.149 [2024-07-15 18:51:10.632813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.149 [2024-07-15 18:51:10.645398] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:54.149 [2024-07-15 18:51:10.645420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.149 [2024-07-15 18:51:10.645428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.149 [2024-07-15 18:51:10.653393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:54.149 [2024-07-15 18:51:10.653413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.149 [2024-07-15 18:51:10.653421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.149 [2024-07-15 18:51:10.665205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:54.150 [2024-07-15 18:51:10.665232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.150 [2024-07-15 18:51:10.665241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.150 [2024-07-15 18:51:10.675326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:54.150 [2024-07-15 18:51:10.675346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.150 [2024-07-15 18:51:10.675358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:54.150 [2024-07-15 18:51:10.683380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:54.150 [2024-07-15 18:51:10.683400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.150 [2024-07-15 18:51:10.683408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.150 [2024-07-15 18:51:10.695236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:54.150 [2024-07-15 18:51:10.695257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.150 [2024-07-15 18:51:10.695265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.150 [2024-07-15 18:51:10.703847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:54.150 [2024-07-15 18:51:10.703868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.150 [2024-07-15 18:51:10.703876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.150 [2024-07-15 18:51:10.715341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:54.150 [2024-07-15 18:51:10.715362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.150 [2024-07-15 18:51:10.715370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.150 [2024-07-15 18:51:10.726283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:54.150 [2024-07-15 18:51:10.726304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.150 [2024-07-15 18:51:10.726312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.150 [2024-07-15 18:51:10.735181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:54.150 [2024-07-15 18:51:10.735202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.150 [2024-07-15 18:51:10.735210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.150 [2024-07-15 18:51:10.745788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:54.150 [2024-07-15 18:51:10.745809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.150 [2024-07-15 18:51:10.745817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.150 [2024-07-15 18:51:10.753783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:54.150 [2024-07-15 18:51:10.753803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.150 [2024-07-15 18:51:10.753811] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.150 [2024-07-15 18:51:10.764495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:54.150 [2024-07-15 18:51:10.764519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.150 [2024-07-15 18:51:10.764527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.150 [2024-07-15 18:51:10.774322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:54.150 [2024-07-15 18:51:10.774342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.150 [2024-07-15 18:51:10.774350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.150 [2024-07-15 18:51:10.782238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:54.150 [2024-07-15 18:51:10.782257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.150 [2024-07-15 18:51:10.782265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.150 [2024-07-15 18:51:10.792358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:54.150 [2024-07-15 18:51:10.792378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25564 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:54.150 [2024-07-15 18:51:10.792386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.150 [2024-07-15 18:51:10.802047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:54.150 [2024-07-15 18:51:10.802067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.150 [2024-07-15 18:51:10.802075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.150 [2024-07-15 18:51:10.811809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:54.150 [2024-07-15 18:51:10.811828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.150 [2024-07-15 18:51:10.811836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.150 [2024-07-15 18:51:10.821548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:54.150 [2024-07-15 18:51:10.821568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.150 [2024-07-15 18:51:10.821576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.150 [2024-07-15 18:51:10.831306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:54.150 [2024-07-15 18:51:10.831326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:19968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.150 [2024-07-15 18:51:10.831334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.150 [2024-07-15 18:51:10.839635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa38f20) 00:25:54.150 [2024-07-15 18:51:10.839654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.150 [2024-07-15 18:51:10.839662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.150 00:25:54.150 Latency(us) 00:25:54.150 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:54.150 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:54.150 nvme0n1 : 2.00 25945.20 101.35 0.00 0.00 4928.48 2521.71 14417.92 00:25:54.150 =================================================================================================================== 00:25:54.150 Total : 25945.20 101.35 0.00 0.00 4928.48 2521.71 14417.92 00:25:54.150 0 00:25:54.410 18:51:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:54.410 18:51:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:54.410 18:51:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:54.410 | .driver_specific 00:25:54.410 | .nvme_error 00:25:54.410 | .status_code 00:25:54.410 | .command_transient_transport_error' 00:25:54.410 18:51:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:54.410 18:51:11 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 203 > 0 )) 00:25:54.410 18:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1235524 00:25:54.410 18:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1235524 ']' 00:25:54.410 18:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1235524 00:25:54.410 18:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:54.410 18:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:54.410 18:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1235524 00:25:54.410 18:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:54.410 18:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:54.410 18:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1235524' 00:25:54.410 killing process with pid 1235524 00:25:54.410 18:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1235524 00:25:54.410 Received shutdown signal, test time was about 2.000000 seconds 00:25:54.410 00:25:54.410 Latency(us) 00:25:54.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:54.410 =================================================================================================================== 00:25:54.410 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:54.410 18:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1235524 00:25:54.670 18:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:54.670 18:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@54 -- # local rw bs qd 00:25:54.670 18:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:54.670 18:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:54.670 18:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:54.670 18:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1236614 00:25:54.670 18:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1236614 /var/tmp/bperf.sock 00:25:54.670 18:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:54.670 18:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1236614 ']' 00:25:54.670 18:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:54.670 18:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:54.670 18:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:54.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:54.670 18:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:54.670 18:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:54.670 [2024-07-15 18:51:11.317547] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:25:54.670 [2024-07-15 18:51:11.317598] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1236614 ] 00:25:54.670 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:54.670 Zero copy mechanism will not be used. 00:25:54.670 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.670 [2024-07-15 18:51:11.372946] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.929 [2024-07-15 18:51:11.446127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.497 18:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:55.497 18:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:55.497 18:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:55.497 18:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:55.756 18:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:55.756 18:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.756 18:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:55.756 18:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.756 18:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 
00:25:55.756 18:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:56.016 nvme0n1 00:25:56.016 18:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:56.016 18:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.016 18:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:56.016 18:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.016 18:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:56.016 18:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:56.016 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:56.016 Zero copy mechanism will not be used. 00:25:56.016 Running I/O for 2 seconds... 
00:25:56.016 [2024-07-15 18:51:12.697615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.016 [2024-07-15 18:51:12.697653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.016 [2024-07-15 18:51:12.697664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.016 [2024-07-15 18:51:12.707369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.016 [2024-07-15 18:51:12.707394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.016 [2024-07-15 18:51:12.707403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.016 [2024-07-15 18:51:12.716966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.016 [2024-07-15 18:51:12.716989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.016 [2024-07-15 18:51:12.716997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.276 [2024-07-15 18:51:12.726663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.276 [2024-07-15 18:51:12.726686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.276 [2024-07-15 18:51:12.726694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.276 [2024-07-15 18:51:12.735919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.276 [2024-07-15 18:51:12.735941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.276 [2024-07-15 18:51:12.735950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.276 [2024-07-15 18:51:12.746257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.276 [2024-07-15 18:51:12.746279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.276 [2024-07-15 18:51:12.746287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.276 [2024-07-15 18:51:12.756561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.276 [2024-07-15 18:51:12.756584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.276 [2024-07-15 18:51:12.756592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.276 [2024-07-15 18:51:12.765725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.276 [2024-07-15 18:51:12.765746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.276 [2024-07-15 18:51:12.765754] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.276 [2024-07-15 18:51:12.775558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.276 [2024-07-15 18:51:12.775580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.276 [2024-07-15 18:51:12.775588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.276 [2024-07-15 18:51:12.785099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.276 [2024-07-15 18:51:12.785125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.276 [2024-07-15 18:51:12.785133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.276 [2024-07-15 18:51:12.796065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.276 [2024-07-15 18:51:12.796086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.276 [2024-07-15 18:51:12.796094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.276 [2024-07-15 18:51:12.804995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.276 [2024-07-15 18:51:12.805016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:56.276 [2024-07-15 18:51:12.805025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.276 [2024-07-15 18:51:12.814124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.276 [2024-07-15 18:51:12.814147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.276 [2024-07-15 18:51:12.814155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.276 [2024-07-15 18:51:12.822544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.276 [2024-07-15 18:51:12.822563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.276 [2024-07-15 18:51:12.822571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.276 [2024-07-15 18:51:12.830645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.276 [2024-07-15 18:51:12.830666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.276 [2024-07-15 18:51:12.830675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.276 [2024-07-15 18:51:12.839669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.276 [2024-07-15 18:51:12.839691] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.276 [2024-07-15 18:51:12.839699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.276 [2024-07-15 18:51:12.848447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.276 [2024-07-15 18:51:12.848467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.276 [2024-07-15 18:51:12.848475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.277 [2024-07-15 18:51:12.855663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.277 [2024-07-15 18:51:12.855682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.277 [2024-07-15 18:51:12.855691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.277 [2024-07-15 18:51:12.863004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.277 [2024-07-15 18:51:12.863024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.277 [2024-07-15 18:51:12.863032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.277 [2024-07-15 18:51:12.872286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.277 [2024-07-15 
18:51:12.872306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.277 [2024-07-15 18:51:12.872314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.277 [2024-07-15 18:51:12.881830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.277 [2024-07-15 18:51:12.881850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.277 [2024-07-15 18:51:12.881858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.277 [2024-07-15 18:51:12.891770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.277 [2024-07-15 18:51:12.891790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.277 [2024-07-15 18:51:12.891798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.277 [2024-07-15 18:51:12.901610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.277 [2024-07-15 18:51:12.901631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.277 [2024-07-15 18:51:12.901639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.277 [2024-07-15 18:51:12.910468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1c030b0) 00:25:56.277 [2024-07-15 18:51:12.910489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.277 [2024-07-15 18:51:12.910498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.277 [2024-07-15 18:51:12.921253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.277 [2024-07-15 18:51:12.921273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.277 [2024-07-15 18:51:12.921281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.277 [2024-07-15 18:51:12.931830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.277 [2024-07-15 18:51:12.931851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.277 [2024-07-15 18:51:12.931859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.277 [2024-07-15 18:51:12.941101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.277 [2024-07-15 18:51:12.941122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.277 [2024-07-15 18:51:12.941135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.277 [2024-07-15 18:51:12.951568] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.277 [2024-07-15 18:51:12.951590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.277 [2024-07-15 18:51:12.951598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.277 [2024-07-15 18:51:12.961659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.277 [2024-07-15 18:51:12.961680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.277 [2024-07-15 18:51:12.961688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.277 [2024-07-15 18:51:12.971985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.277 [2024-07-15 18:51:12.972006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.277 [2024-07-15 18:51:12.972014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.277 [2024-07-15 18:51:12.982229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.277 [2024-07-15 18:51:12.982250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.277 [2024-07-15 18:51:12.982258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:25:56.537 [2024-07-15 18:51:12.991005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.537 [2024-07-15 18:51:12.991025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.537 [2024-07-15 18:51:12.991033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.537 [2024-07-15 18:51:13.000516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.537 [2024-07-15 18:51:13.000536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.537 [2024-07-15 18:51:13.000544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.537 [2024-07-15 18:51:13.008054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.537 [2024-07-15 18:51:13.008075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.537 [2024-07-15 18:51:13.008083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.537 [2024-07-15 18:51:13.017480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.537 [2024-07-15 18:51:13.017501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.537 [2024-07-15 18:51:13.017509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.537 [2024-07-15 18:51:13.026486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.537 [2024-07-15 18:51:13.026511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.537 [2024-07-15 18:51:13.026519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.537 [2024-07-15 18:51:13.035604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.537 [2024-07-15 18:51:13.035624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.537 [2024-07-15 18:51:13.035633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.537 [2024-07-15 18:51:13.043614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.538 [2024-07-15 18:51:13.043635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.538 [2024-07-15 18:51:13.043643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.538 [2024-07-15 18:51:13.052690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.538 [2024-07-15 18:51:13.052711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.538 [2024-07-15 
18:51:13.052718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.538 [2024-07-15 18:51:13.063063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.538 [2024-07-15 18:51:13.063083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.538 [2024-07-15 18:51:13.063091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.538 [2024-07-15 18:51:13.072184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.538 [2024-07-15 18:51:13.072205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.538 [2024-07-15 18:51:13.072213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.538 [2024-07-15 18:51:13.082689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.538 [2024-07-15 18:51:13.082709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.538 [2024-07-15 18:51:13.082717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.538 [2024-07-15 18:51:13.092248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.538 [2024-07-15 18:51:13.092269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.538 [2024-07-15 18:51:13.092277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.538 [2024-07-15 18:51:13.102290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.538 [2024-07-15 18:51:13.102311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.538 [2024-07-15 18:51:13.102319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.538 [2024-07-15 18:51:13.111569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.538 [2024-07-15 18:51:13.111589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.538 [2024-07-15 18:51:13.111597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.538 [2024-07-15 18:51:13.121956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.538 [2024-07-15 18:51:13.121977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.538 [2024-07-15 18:51:13.121985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.538 [2024-07-15 18:51:13.130948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.538 [2024-07-15 18:51:13.130969] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.538 [2024-07-15 18:51:13.130977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.538 [2024-07-15 18:51:13.140446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.538 [2024-07-15 18:51:13.140466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.538 [2024-07-15 18:51:13.140473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.538 [2024-07-15 18:51:13.148925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.538 [2024-07-15 18:51:13.148949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.538 [2024-07-15 18:51:13.148957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.538 [2024-07-15 18:51:13.159481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.538 [2024-07-15 18:51:13.159504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.538 [2024-07-15 18:51:13.159512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.538 [2024-07-15 18:51:13.169257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c030b0) 00:25:56.538 [2024-07-15 18:51:13.169279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.538 [2024-07-15 18:51:13.169287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.538 [2024-07-15 18:51:13.178756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.538 [2024-07-15 18:51:13.178779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.538 [2024-07-15 18:51:13.178787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.538 [2024-07-15 18:51:13.189024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.538 [2024-07-15 18:51:13.189049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.538 [2024-07-15 18:51:13.189058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.538 [2024-07-15 18:51:13.198071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.538 [2024-07-15 18:51:13.198094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.538 [2024-07-15 18:51:13.198102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.538 [2024-07-15 18:51:13.207540] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.538 [2024-07-15 18:51:13.207562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.538 [2024-07-15 18:51:13.207571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.538 [2024-07-15 18:51:13.217633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.538 [2024-07-15 18:51:13.217655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.538 [2024-07-15 18:51:13.217664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.538 [2024-07-15 18:51:13.227401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.538 [2024-07-15 18:51:13.227423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.538 [2024-07-15 18:51:13.227432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.538 [2024-07-15 18:51:13.237432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.538 [2024-07-15 18:51:13.237453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.538 [2024-07-15 18:51:13.237461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:25:56.796 [2024-07-15 18:51:13.247381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.796 [2024-07-15 18:51:13.247404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.796 [2024-07-15 18:51:13.247412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.796 [2024-07-15 18:51:13.256638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.796 [2024-07-15 18:51:13.256659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.796 [2024-07-15 18:51:13.256667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.796 [2024-07-15 18:51:13.265703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.796 [2024-07-15 18:51:13.265724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.796 [2024-07-15 18:51:13.265732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.796 [2024-07-15 18:51:13.274775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.796 [2024-07-15 18:51:13.274796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.797 [2024-07-15 18:51:13.274804] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.797 [2024-07-15 18:51:13.284094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.797 [2024-07-15 18:51:13.284115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.797 [2024-07-15 18:51:13.284123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.797 [2024-07-15 18:51:13.293719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.797 [2024-07-15 18:51:13.293740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.797 [2024-07-15 18:51:13.293748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.797 [2024-07-15 18:51:13.303059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.797 [2024-07-15 18:51:13.303080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.797 [2024-07-15 18:51:13.303088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.797 [2024-07-15 18:51:13.312190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.797 [2024-07-15 18:51:13.312210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.797 [2024-07-15 
18:51:13.312218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.797 [2024-07-15 18:51:13.321533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.797 [2024-07-15 18:51:13.321554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.797 [2024-07-15 18:51:13.321562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.797 [2024-07-15 18:51:13.331082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.797 [2024-07-15 18:51:13.331103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.797 [2024-07-15 18:51:13.331110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.797 [2024-07-15 18:51:13.340782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.797 [2024-07-15 18:51:13.340803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.797 [2024-07-15 18:51:13.340810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.797 [2024-07-15 18:51:13.351198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.797 [2024-07-15 18:51:13.351219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.797 [2024-07-15 18:51:13.351239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.797 [2024-07-15 18:51:13.359089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.797 [2024-07-15 18:51:13.359109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.797 [2024-07-15 18:51:13.359117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.797 [2024-07-15 18:51:13.366314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.797 [2024-07-15 18:51:13.366334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.797 [2024-07-15 18:51:13.366342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.797 [2024-07-15 18:51:13.374960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.797 [2024-07-15 18:51:13.374981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.797 [2024-07-15 18:51:13.374988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.797 [2024-07-15 18:51:13.380463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.797 [2024-07-15 18:51:13.380484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.797 [2024-07-15 18:51:13.380491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.797 [2024-07-15 18:51:13.387599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.797 [2024-07-15 18:51:13.387620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.797 [2024-07-15 18:51:13.387627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.797 [2024-07-15 18:51:13.395375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.797 [2024-07-15 18:51:13.395395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.797 [2024-07-15 18:51:13.395403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.797 [2024-07-15 18:51:13.403940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.797 [2024-07-15 18:51:13.403960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.797 [2024-07-15 18:51:13.403968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.797 [2024-07-15 18:51:13.412580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c030b0) 00:25:56.797 [2024-07-15 18:51:13.412601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.797 [2024-07-15 18:51:13.412609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.797 [2024-07-15 18:51:13.421113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.797 [2024-07-15 18:51:13.421137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.797 [2024-07-15 18:51:13.421145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.797 [2024-07-15 18:51:13.429793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.797 [2024-07-15 18:51:13.429815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.797 [2024-07-15 18:51:13.429822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.797 [2024-07-15 18:51:13.438700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.797 [2024-07-15 18:51:13.438721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.797 [2024-07-15 18:51:13.438729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.797 [2024-07-15 18:51:13.447164] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.797 [2024-07-15 18:51:13.447185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.797 [2024-07-15 18:51:13.447193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.797 [2024-07-15 18:51:13.455607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.797 [2024-07-15 18:51:13.455629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.797 [2024-07-15 18:51:13.455638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.797 [2024-07-15 18:51:13.464193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.797 [2024-07-15 18:51:13.464213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.797 [2024-07-15 18:51:13.464221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.797 [2024-07-15 18:51:13.472889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:56.797 [2024-07-15 18:51:13.472911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.797 [2024-07-15 18:51:13.472919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0
00:25:56.797 [2024-07-15 18:51:13.481266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0)
00:25:56.797 [2024-07-15 18:51:13.481287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.797 [2024-07-15 18:51:13.481294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-record cycle (nvme_tcp.c:1459 "data digest error on tqpair=(0x1c030b0)", nvme_qpair.c:243 READ command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats continuously from 18:51:13.489871 through 18:51:14.107658, varying only the timestamp, cid (0/1/2/15), lba, and sqhd fields ...]
00:25:57.576 [2024-07-15 18:51:14.114741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on
tqpair=(0x1c030b0) 00:25:57.576 [2024-07-15 18:51:14.114761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.576 [2024-07-15 18:51:14.114769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.576 [2024-07-15 18:51:14.121711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.576 [2024-07-15 18:51:14.121732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.576 [2024-07-15 18:51:14.121740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.576 [2024-07-15 18:51:14.128914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.576 [2024-07-15 18:51:14.128935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.576 [2024-07-15 18:51:14.128942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.577 [2024-07-15 18:51:14.135908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.577 [2024-07-15 18:51:14.135928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.577 [2024-07-15 18:51:14.135936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.577 [2024-07-15 18:51:14.142835] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.577 [2024-07-15 18:51:14.142856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.577 [2024-07-15 18:51:14.142864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.577 [2024-07-15 18:51:14.149946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.577 [2024-07-15 18:51:14.149967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.577 [2024-07-15 18:51:14.149974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.577 [2024-07-15 18:51:14.156950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.577 [2024-07-15 18:51:14.156970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.577 [2024-07-15 18:51:14.156978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.577 [2024-07-15 18:51:14.163963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.577 [2024-07-15 18:51:14.163984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.577 [2024-07-15 18:51:14.163992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:25:57.577 [2024-07-15 18:51:14.171016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.577 [2024-07-15 18:51:14.171039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.577 [2024-07-15 18:51:14.171047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.577 [2024-07-15 18:51:14.178066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.577 [2024-07-15 18:51:14.178088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.577 [2024-07-15 18:51:14.178096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.577 [2024-07-15 18:51:14.185072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.577 [2024-07-15 18:51:14.185093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.577 [2024-07-15 18:51:14.185101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.577 [2024-07-15 18:51:14.192034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.577 [2024-07-15 18:51:14.192055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.577 [2024-07-15 18:51:14.192063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.577 [2024-07-15 18:51:14.199081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.577 [2024-07-15 18:51:14.199101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.577 [2024-07-15 18:51:14.199109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.577 [2024-07-15 18:51:14.206163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.577 [2024-07-15 18:51:14.206183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.577 [2024-07-15 18:51:14.206193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.577 [2024-07-15 18:51:14.213163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.577 [2024-07-15 18:51:14.213183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.577 [2024-07-15 18:51:14.213191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.577 [2024-07-15 18:51:14.220192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.577 [2024-07-15 18:51:14.220212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.577 [2024-07-15 18:51:14.220223] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.577 [2024-07-15 18:51:14.227246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.577 [2024-07-15 18:51:14.227266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.577 [2024-07-15 18:51:14.227273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.577 [2024-07-15 18:51:14.234233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.577 [2024-07-15 18:51:14.234253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.577 [2024-07-15 18:51:14.234261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.577 [2024-07-15 18:51:14.241479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.577 [2024-07-15 18:51:14.241500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.577 [2024-07-15 18:51:14.241508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.577 [2024-07-15 18:51:14.248432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.577 [2024-07-15 18:51:14.248452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:57.577 [2024-07-15 18:51:14.248460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.577 [2024-07-15 18:51:14.255562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.577 [2024-07-15 18:51:14.255582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.577 [2024-07-15 18:51:14.255590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.577 [2024-07-15 18:51:14.262598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.577 [2024-07-15 18:51:14.262619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.577 [2024-07-15 18:51:14.262627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.577 [2024-07-15 18:51:14.269708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.577 [2024-07-15 18:51:14.269729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.577 [2024-07-15 18:51:14.269737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.577 [2024-07-15 18:51:14.276955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.577 [2024-07-15 18:51:14.276975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.577 [2024-07-15 18:51:14.276982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.836 [2024-07-15 18:51:14.283984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.836 [2024-07-15 18:51:14.284008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.836 [2024-07-15 18:51:14.284016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.836 [2024-07-15 18:51:14.291163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.836 [2024-07-15 18:51:14.291184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.836 [2024-07-15 18:51:14.291192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.836 [2024-07-15 18:51:14.298156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.836 [2024-07-15 18:51:14.298176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.836 [2024-07-15 18:51:14.298184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.836 [2024-07-15 18:51:14.305266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.836 [2024-07-15 18:51:14.305287] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.836 [2024-07-15 18:51:14.305294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.836 [2024-07-15 18:51:14.312254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.836 [2024-07-15 18:51:14.312275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.836 [2024-07-15 18:51:14.312283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.836 [2024-07-15 18:51:14.319308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.836 [2024-07-15 18:51:14.319328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.836 [2024-07-15 18:51:14.319336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.836 [2024-07-15 18:51:14.326455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.836 [2024-07-15 18:51:14.326475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.836 [2024-07-15 18:51:14.326483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.836 [2024-07-15 18:51:14.333556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c030b0) 00:25:57.836 [2024-07-15 18:51:14.333577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.836 [2024-07-15 18:51:14.333585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.836 [2024-07-15 18:51:14.340650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.836 [2024-07-15 18:51:14.340671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.836 [2024-07-15 18:51:14.340679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.836 [2024-07-15 18:51:14.347598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.837 [2024-07-15 18:51:14.347619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.837 [2024-07-15 18:51:14.347627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.837 [2024-07-15 18:51:14.354648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.837 [2024-07-15 18:51:14.354669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.837 [2024-07-15 18:51:14.354677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.837 [2024-07-15 18:51:14.361609] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.837 [2024-07-15 18:51:14.361630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.837 [2024-07-15 18:51:14.361637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.837 [2024-07-15 18:51:14.368698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.837 [2024-07-15 18:51:14.368719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.837 [2024-07-15 18:51:14.368726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.837 [2024-07-15 18:51:14.375800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.837 [2024-07-15 18:51:14.375820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.837 [2024-07-15 18:51:14.375828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.837 [2024-07-15 18:51:14.382725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.837 [2024-07-15 18:51:14.382746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.837 [2024-07-15 18:51:14.382753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:25:57.837 [2024-07-15 18:51:14.389676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.837 [2024-07-15 18:51:14.389697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.837 [2024-07-15 18:51:14.389705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.837 [2024-07-15 18:51:14.396737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.837 [2024-07-15 18:51:14.396757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.837 [2024-07-15 18:51:14.396765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.837 [2024-07-15 18:51:14.403720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.837 [2024-07-15 18:51:14.403740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.837 [2024-07-15 18:51:14.403751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.837 [2024-07-15 18:51:14.410887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.837 [2024-07-15 18:51:14.410908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.837 [2024-07-15 18:51:14.410915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.837 [2024-07-15 18:51:14.417934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.837 [2024-07-15 18:51:14.417954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.837 [2024-07-15 18:51:14.417962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.837 [2024-07-15 18:51:14.424978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.837 [2024-07-15 18:51:14.424999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.837 [2024-07-15 18:51:14.425006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.837 [2024-07-15 18:51:14.431979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.837 [2024-07-15 18:51:14.432000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.837 [2024-07-15 18:51:14.432007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.837 [2024-07-15 18:51:14.438868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.837 [2024-07-15 18:51:14.438888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.837 [2024-07-15 18:51:14.438896] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.837 [2024-07-15 18:51:14.445949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.837 [2024-07-15 18:51:14.445971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.837 [2024-07-15 18:51:14.445978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.837 [2024-07-15 18:51:14.452973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.837 [2024-07-15 18:51:14.452994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.837 [2024-07-15 18:51:14.453002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.837 [2024-07-15 18:51:14.459985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.837 [2024-07-15 18:51:14.460005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.837 [2024-07-15 18:51:14.460013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.837 [2024-07-15 18:51:14.467082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.837 [2024-07-15 18:51:14.467102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:57.837 [2024-07-15 18:51:14.467110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.837 [2024-07-15 18:51:14.474074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.837 [2024-07-15 18:51:14.474094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.837 [2024-07-15 18:51:14.474102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.837 [2024-07-15 18:51:14.481217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.837 [2024-07-15 18:51:14.481242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.837 [2024-07-15 18:51:14.481260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.837 [2024-07-15 18:51:14.488361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.837 [2024-07-15 18:51:14.488381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.837 [2024-07-15 18:51:14.488389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.837 [2024-07-15 18:51:14.495443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.837 [2024-07-15 18:51:14.495463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.837 [2024-07-15 18:51:14.495471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.837 [2024-07-15 18:51:14.502487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.837 [2024-07-15 18:51:14.502508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.837 [2024-07-15 18:51:14.502516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.837 [2024-07-15 18:51:14.509513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.837 [2024-07-15 18:51:14.509533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.837 [2024-07-15 18:51:14.509541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.837 [2024-07-15 18:51:14.516439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.837 [2024-07-15 18:51:14.516459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.837 [2024-07-15 18:51:14.516466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.837 [2024-07-15 18:51:14.523535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.837 [2024-07-15 18:51:14.523556] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.837 [2024-07-15 18:51:14.523569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.837 [2024-07-15 18:51:14.530564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.837 [2024-07-15 18:51:14.530585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.837 [2024-07-15 18:51:14.530592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.837 [2024-07-15 18:51:14.537660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:57.837 [2024-07-15 18:51:14.537680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.837 [2024-07-15 18:51:14.537687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.097 [2024-07-15 18:51:14.544783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:58.097 [2024-07-15 18:51:14.544803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.097 [2024-07-15 18:51:14.544811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.097 [2024-07-15 18:51:14.551941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c030b0) 00:25:58.097 [2024-07-15 18:51:14.551961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.097 [2024-07-15 18:51:14.551968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.097 [2024-07-15 18:51:14.559021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:58.097 [2024-07-15 18:51:14.559041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.097 [2024-07-15 18:51:14.559049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.097 [2024-07-15 18:51:14.565928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:58.097 [2024-07-15 18:51:14.565948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.097 [2024-07-15 18:51:14.565956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.097 [2024-07-15 18:51:14.572999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:58.097 [2024-07-15 18:51:14.573020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.097 [2024-07-15 18:51:14.573028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.097 [2024-07-15 18:51:14.580177] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:58.097 [2024-07-15 18:51:14.580198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.097 [2024-07-15 18:51:14.580206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.097 [2024-07-15 18:51:14.587141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:58.097 [2024-07-15 18:51:14.587165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.097 [2024-07-15 18:51:14.587173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.097 [2024-07-15 18:51:14.594312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:58.097 [2024-07-15 18:51:14.594332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.097 [2024-07-15 18:51:14.594340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.097 [2024-07-15 18:51:14.601339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:58.097 [2024-07-15 18:51:14.601360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.097 [2024-07-15 18:51:14.601367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:58.097 [2024-07-15 18:51:14.608386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:58.097 [2024-07-15 18:51:14.608406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.097 [2024-07-15 18:51:14.608414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.097 [2024-07-15 18:51:14.615479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:58.097 [2024-07-15 18:51:14.615500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.097 [2024-07-15 18:51:14.615508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.097 [2024-07-15 18:51:14.622516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:58.097 [2024-07-15 18:51:14.622537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.097 [2024-07-15 18:51:14.622545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.097 [2024-07-15 18:51:14.629598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:58.097 [2024-07-15 18:51:14.629619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.097 [2024-07-15 18:51:14.629627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.097 [2024-07-15 18:51:14.636653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:58.097 [2024-07-15 18:51:14.636674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.097 [2024-07-15 18:51:14.636682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.097 [2024-07-15 18:51:14.643746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:58.097 [2024-07-15 18:51:14.643768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.097 [2024-07-15 18:51:14.643776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.097 [2024-07-15 18:51:14.650942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:58.097 [2024-07-15 18:51:14.650964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.097 [2024-07-15 18:51:14.650971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.098 [2024-07-15 18:51:14.657968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:58.098 [2024-07-15 18:51:14.657990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.098 [2024-07-15 18:51:14.657998] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.098 [2024-07-15 18:51:14.665179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:58.098 [2024-07-15 18:51:14.665200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.098 [2024-07-15 18:51:14.665208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.098 [2024-07-15 18:51:14.672236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:58.098 [2024-07-15 18:51:14.672257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.098 [2024-07-15 18:51:14.672265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.098 [2024-07-15 18:51:14.679412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:58.098 [2024-07-15 18:51:14.679433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.098 [2024-07-15 18:51:14.679441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.098 [2024-07-15 18:51:14.686620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0) 00:25:58.098 [2024-07-15 18:51:14.686641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:25:58.098 [2024-07-15 18:51:14.686649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:58.098 [2024-07-15 18:51:14.693450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c030b0)
00:25:58.098 [2024-07-15 18:51:14.693471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.098 [2024-07-15 18:51:14.693478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:58.098
00:25:58.098 Latency(us)
00:25:58.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:58.098 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:25:58.098 nvme0n1 : 2.00 3853.91 481.74 0.00 0.00 4147.65 1339.21 11112.63
00:25:58.098 ===================================================================================================================
00:25:58.098 Total : 3853.91 481.74 0.00 0.00 4147.65 1339.21 11112.63
00:25:58.098 0
00:25:58.098 18:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:58.098 18:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:58.098 18:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:58.098 | .driver_specific
00:25:58.098 | .nvme_error
00:25:58.098 | .status_code
00:25:58.098 | .command_transient_transport_error'
00:25:58.098 18:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:58.357 18:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 249 > 0
))
00:25:58.357 18:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1236614
00:25:58.357 18:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1236614 ']'
00:25:58.357 18:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1236614
00:25:58.357 18:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:25:58.357 18:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:58.357 18:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1236614
00:25:58.357 18:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:25:58.357 18:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:25:58.357 18:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1236614'
killing process with pid 1236614
00:25:58.357 18:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1236614
Received shutdown signal, test time was about 2.000000 seconds
00:25:58.357
00:25:58.357 Latency(us)
00:25:58.357 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:58.357 ===================================================================================================================
00:25:58.357 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:58.357 18:51:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1236614
00:25:58.616 18:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:25:58.616 18:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:25:58.616 18:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:25:58.616 18:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:25:58.616 18:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:25:58.616 18:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1237301
00:25:58.616 18:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1237301 /var/tmp/bperf.sock
00:25:58.616 18:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:25:58.616 18:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1237301 ']'
00:25:58.616 18:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:58.616 18:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:58.616 18:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:25:58.616 18:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:58.616 18:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:58.616 [2024-07-15 18:51:15.172201] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization...
00:25:58.616 [2024-07-15 18:51:15.172256] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1237301 ]
00:25:58.616 EAL: No free 2048 kB hugepages reported on node 1
00:25:58.616 [2024-07-15 18:51:15.225399] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:58.616 [2024-07-15 18:51:15.296686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:59.553 18:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:59.553 18:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:25:59.553 18:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:59.553 18:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:59.553 18:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:59.553 18:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:59.553 18:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:59.553 18:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:59.553 18:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:59.553 18:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:00.121 nvme0n1
00:26:00.121 18:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:26:00.121 18:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:00.121 18:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:00.121 18:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:00.121 18:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:00.121 18:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:00.121 Running I/O for 2 seconds...
00:26:00.121 [2024-07-15 18:51:16.702771] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f81e0 00:26:00.121 [2024-07-15 18:51:16.703533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.121 [2024-07-15 18:51:16.703559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:00.121 [2024-07-15 18:51:16.711675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e0630 00:26:00.121 [2024-07-15 18:51:16.712415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.122 [2024-07-15 18:51:16.712436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:00.122 [2024-07-15 18:51:16.721955] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f1430 00:26:00.122 [2024-07-15 18:51:16.722812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.122 [2024-07-15 18:51:16.722832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:00.122 [2024-07-15 18:51:16.731197] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f9b30 00:26:00.122 [2024-07-15 18:51:16.731983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.122 [2024-07-15 18:51:16.732002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:52 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:00.122 [2024-07-15 18:51:16.740341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f2510 00:26:00.122 [2024-07-15 18:51:16.741115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.122 [2024-07-15 18:51:16.741133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:00.122 [2024-07-15 18:51:16.748735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190ebb98 00:26:00.122 [2024-07-15 18:51:16.749504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.122 [2024-07-15 18:51:16.749523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:00.122 [2024-07-15 18:51:16.758949] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e1f80 00:26:00.122 [2024-07-15 18:51:16.759848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.122 [2024-07-15 18:51:16.759868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:00.122 [2024-07-15 18:51:16.768657] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f2d80 00:26:00.122 [2024-07-15 18:51:16.769762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.122 [2024-07-15 18:51:16.769781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:00.122 [2024-07-15 18:51:16.777725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190de470 00:26:00.122 [2024-07-15 18:51:16.778483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.122 [2024-07-15 18:51:16.778502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:00.122 [2024-07-15 18:51:16.785997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190eaef0 00:26:00.122 [2024-07-15 18:51:16.786956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.122 [2024-07-15 18:51:16.786974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:00.122 [2024-07-15 18:51:16.796918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e6738 00:26:00.122 [2024-07-15 18:51:16.798270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.122 [2024-07-15 18:51:16.798287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:00.122 [2024-07-15 18:51:16.805043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190fa7d8 00:26:00.122 [2024-07-15 18:51:16.805910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.122 [2024-07-15 18:51:16.805933] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:00.122 [2024-07-15 18:51:16.813721] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f8a50 00:26:00.122 [2024-07-15 18:51:16.815056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.122 [2024-07-15 18:51:16.815075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:00.122 [2024-07-15 18:51:16.821707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e95a0 00:26:00.122 [2024-07-15 18:51:16.822328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:18413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.122 [2024-07-15 18:51:16.822345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:00.381 [2024-07-15 18:51:16.831530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e4578 00:26:00.381 [2024-07-15 18:51:16.832382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.381 [2024-07-15 18:51:16.832400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:00.381 [2024-07-15 18:51:16.842485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f7da8 00:26:00.381 [2024-07-15 18:51:16.843806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.381 
[2024-07-15 18:51:16.843823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:26:00.381 [2024-07-15 18:51:16.851024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f8618
00:26:00.381 [2024-07-15 18:51:16.851881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.381 [2024-07-15 18:51:16.851899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:00.381 [2024-07-15 18:51:16.860359] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190ebfd0
00:26:00.381 [2024-07-15 18:51:16.861088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.381 [2024-07-15 18:51:16.861106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:00.381 [2024-07-15 18:51:16.870902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e7c50
00:26:00.381 [2024-07-15 18:51:16.872454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.381 [2024-07-15 18:51:16.872472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:00.381 [2024-07-15 18:51:16.877370] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190ec840
00:26:00.381 [2024-07-15 18:51:16.878075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.381 [2024-07-15 18:51:16.878092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:26:00.381 [2024-07-15 18:51:16.886046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e23b8
00:26:00.382 [2024-07-15 18:51:16.886752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.382 [2024-07-15 18:51:16.886769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:26:00.382 [2024-07-15 18:51:16.896239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f8e88
00:26:00.382 [2024-07-15 18:51:16.896982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.382 [2024-07-15 18:51:16.897000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.382 [2024-07-15 18:51:16.905662] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f6cc8
00:26:00.382 [2024-07-15 18:51:16.906610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:18540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.382 [2024-07-15 18:51:16.906628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.382 [2024-07-15 18:51:16.914322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e1b48
00:26:00.382 [2024-07-15 18:51:16.915260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.382 [2024-07-15 18:51:16.915277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:26:00.382 [2024-07-15 18:51:16.924520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f2948
00:26:00.382 [2024-07-15 18:51:16.925506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.382 [2024-07-15 18:51:16.925523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:00.382 [2024-07-15 18:51:16.933948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190eaef0
00:26:00.382 [2024-07-15 18:51:16.935139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.382 [2024-07-15 18:51:16.935157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:00.382 [2024-07-15 18:51:16.941437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f3a28
00:26:00.382 [2024-07-15 18:51:16.942044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.382 [2024-07-15 18:51:16.942062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:26:00.382 [2024-07-15 18:51:16.950831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190fa7d8
00:26:00.382 [2024-07-15 18:51:16.951675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.382 [2024-07-15 18:51:16.951694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:00.382 [2024-07-15 18:51:16.959429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e12d8
00:26:00.382 [2024-07-15 18:51:16.960268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.382 [2024-07-15 18:51:16.960287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:26:00.382 [2024-07-15 18:51:16.969242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190ed4e8
00:26:00.382 [2024-07-15 18:51:16.970194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.382 [2024-07-15 18:51:16.970211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:26:00.382 [2024-07-15 18:51:16.978890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190de8a8
00:26:00.382 [2024-07-15 18:51:16.979996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.382 [2024-07-15 18:51:16.980014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:26:00.382 [2024-07-15 18:51:16.988594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190fc560
00:26:00.382 [2024-07-15 18:51:16.989772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.382 [2024-07-15 18:51:16.989790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:26:00.382 [2024-07-15 18:51:16.997142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e12d8
00:26:00.382 [2024-07-15 18:51:16.997952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.382 [2024-07-15 18:51:16.997970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.382 [2024-07-15 18:51:17.006157] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190feb58
00:26:00.382 [2024-07-15 18:51:17.006962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.382 [2024-07-15 18:51:17.006980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.382 [2024-07-15 18:51:17.015304] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f0ff8
00:26:00.382 [2024-07-15 18:51:17.016137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.382 [2024-07-15 18:51:17.016155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.382 [2024-07-15 18:51:17.024478] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f2510
00:26:00.382 [2024-07-15 18:51:17.025335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.382 [2024-07-15 18:51:17.025353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.382 [2024-07-15 18:51:17.033690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f0bc0
00:26:00.382 [2024-07-15 18:51:17.034518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.382 [2024-07-15 18:51:17.034535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.382 [2024-07-15 18:51:17.042843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f6458
00:26:00.382 [2024-07-15 18:51:17.043670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.382 [2024-07-15 18:51:17.043691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.382 [2024-07-15 18:51:17.051988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190edd58
00:26:00.382 [2024-07-15 18:51:17.052816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.382 [2024-07-15 18:51:17.052834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.382 [2024-07-15 18:51:17.061148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190fcdd0
00:26:00.382 [2024-07-15 18:51:17.061976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.382 [2024-07-15 18:51:17.061994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.382 [2024-07-15 18:51:17.070365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e27f0
00:26:00.382 [2024-07-15 18:51:17.071166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.382 [2024-07-15 18:51:17.071183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.382 [2024-07-15 18:51:17.079505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e4578
00:26:00.382 [2024-07-15 18:51:17.080311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.382 [2024-07-15 18:51:17.080328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.641 [2024-07-15 18:51:17.088898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190eb760
00:26:00.641 [2024-07-15 18:51:17.089752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.641 [2024-07-15 18:51:17.089770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.642 [2024-07-15 18:51:17.098213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190ec840
00:26:00.642 [2024-07-15 18:51:17.099048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.642 [2024-07-15 18:51:17.099065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.642 [2024-07-15 18:51:17.107346] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e9e10
00:26:00.642 [2024-07-15 18:51:17.108135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.642 [2024-07-15 18:51:17.108152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.642 [2024-07-15 18:51:17.116498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e8d30
00:26:00.642 [2024-07-15 18:51:17.117331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.642 [2024-07-15 18:51:17.117349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.642 [2024-07-15 18:51:17.125667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e7c50
00:26:00.642 [2024-07-15 18:51:17.126496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.642 [2024-07-15 18:51:17.126513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.642 [2024-07-15 18:51:17.134808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f35f0
00:26:00.642 [2024-07-15 18:51:17.135635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.642 [2024-07-15 18:51:17.135653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.642 [2024-07-15 18:51:17.143965] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f9f68
00:26:00.642 [2024-07-15 18:51:17.144775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.642 [2024-07-15 18:51:17.144793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.642 [2024-07-15 18:51:17.153079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190ff3c8
00:26:00.642 [2024-07-15 18:51:17.153931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.642 [2024-07-15 18:51:17.153949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.642 [2024-07-15 18:51:17.162223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190df988
00:26:00.642 [2024-07-15 18:51:17.163029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.642 [2024-07-15 18:51:17.163047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.642 [2024-07-15 18:51:17.171368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f5378
00:26:00.642 [2024-07-15 18:51:17.172171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.642 [2024-07-15 18:51:17.172188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.642 [2024-07-15 18:51:17.180491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f1868
00:26:00.642 [2024-07-15 18:51:17.181328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.642 [2024-07-15 18:51:17.181346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.642 [2024-07-15 18:51:17.189696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f6020
00:26:00.642 [2024-07-15 18:51:17.190506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.642 [2024-07-15 18:51:17.190524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.642 [2024-07-15 18:51:17.198831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e1f80
00:26:00.642 [2024-07-15 18:51:17.199639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.642 [2024-07-15 18:51:17.199656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.642 [2024-07-15 18:51:17.207889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190fd208
00:26:00.642 [2024-07-15 18:51:17.208721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.642 [2024-07-15 18:51:17.208738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.642 [2024-07-15 18:51:17.217071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190ddc00
00:26:00.642 [2024-07-15 18:51:17.217914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.642 [2024-07-15 18:51:17.217933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.642 [2024-07-15 18:51:17.226403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e49b0
00:26:00.642 [2024-07-15 18:51:17.227259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.642 [2024-07-15 18:51:17.227277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.642 [2024-07-15 18:51:17.235701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e01f8
00:26:00.642 [2024-07-15 18:51:17.236568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.642 [2024-07-15 18:51:17.236587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.642 [2024-07-15 18:51:17.244950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190ec408
00:26:00.642 [2024-07-15 18:51:17.245765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.642 [2024-07-15 18:51:17.245783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.642 [2024-07-15 18:51:17.254117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e6738
00:26:00.642 [2024-07-15 18:51:17.254971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.642 [2024-07-15 18:51:17.254989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.642 [2024-07-15 18:51:17.263277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e1710
00:26:00.642 [2024-07-15 18:51:17.264110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.642 [2024-07-15 18:51:17.264128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.642 [2024-07-15 18:51:17.272416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e88f8
00:26:00.642 [2024-07-15 18:51:17.273230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.642 [2024-07-15 18:51:17.273247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.642 [2024-07-15 18:51:17.281622] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e7818
00:26:00.642 [2024-07-15 18:51:17.282486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.642 [2024-07-15 18:51:17.282509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.642 [2024-07-15 18:51:17.290825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f3a28
00:26:00.642 [2024-07-15 18:51:17.291617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.642 [2024-07-15 18:51:17.291634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.642 [2024-07-15 18:51:17.299973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e12d8
00:26:00.642 [2024-07-15 18:51:17.300785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.642 [2024-07-15 18:51:17.300802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.642 [2024-07-15 18:51:17.309106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190feb58
00:26:00.642 [2024-07-15 18:51:17.309940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.642 [2024-07-15 18:51:17.309957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.642 [2024-07-15 18:51:17.318260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f0ff8
00:26:00.642 [2024-07-15 18:51:17.319052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.642 [2024-07-15 18:51:17.319069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.642 [2024-07-15 18:51:17.327395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f2510
00:26:00.642 [2024-07-15 18:51:17.328206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.642 [2024-07-15 18:51:17.328223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.642 [2024-07-15 18:51:17.336680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f0bc0
00:26:00.643 [2024-07-15 18:51:17.337510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.643 [2024-07-15 18:51:17.337527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.643 [2024-07-15 18:51:17.347167] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f6458
00:26:00.902 [2024-07-15 18:51:17.348510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.902 [2024-07-15 18:51:17.348528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:00.902 [2024-07-15 18:51:17.355911] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f2d80
00:26:00.902 [2024-07-15 18:51:17.356900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.902 [2024-07-15 18:51:17.356918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:00.902 [2024-07-15 18:51:17.366152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190ee5c8
00:26:00.902 [2024-07-15 18:51:17.367585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.902 [2024-07-15 18:51:17.367605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:00.902 [2024-07-15 18:51:17.375761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f96f8
00:26:00.902 [2024-07-15 18:51:17.377276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.902 [2024-07-15 18:51:17.377293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:00.902 [2024-07-15 18:51:17.382220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f3a28
00:26:00.902 [2024-07-15 18:51:17.382931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.902 [2024-07-15 18:51:17.382949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:26:00.902 [2024-07-15 18:51:17.391531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190fb048
00:26:00.902 [2024-07-15 18:51:17.392233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.902 [2024-07-15 18:51:17.392251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:00.902 [2024-07-15 18:51:17.400684] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e95a0
00:26:00.902 [2024-07-15 18:51:17.401399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.902 [2024-07-15 18:51:17.401417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:00.902 [2024-07-15 18:51:17.409824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190fb480
00:26:00.902 [2024-07-15 18:51:17.410529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.902 [2024-07-15 18:51:17.410546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:00.902 [2024-07-15 18:51:17.418971] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190fe720
00:26:00.902 [2024-07-15 18:51:17.419674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.902 [2024-07-15 18:51:17.419692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:00.902 [2024-07-15 18:51:17.428032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190dece0
00:26:00.903 [2024-07-15 18:51:17.428738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.903 [2024-07-15 18:51:17.428755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:00.903 [2024-07-15 18:51:17.437188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f0bc0
00:26:00.903 [2024-07-15 18:51:17.437894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.903 [2024-07-15 18:51:17.437913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:00.903 [2024-07-15 18:51:17.446358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f2510
00:26:00.903 [2024-07-15 18:51:17.447043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.903 [2024-07-15 18:51:17.447061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:00.903 [2024-07-15 18:51:17.455496] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f0ff8
00:26:00.903 [2024-07-15 18:51:17.456176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.903 [2024-07-15 18:51:17.456194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:00.903 [2024-07-15 18:51:17.464658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f6890
00:26:00.903 [2024-07-15 18:51:17.465375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.903 [2024-07-15 18:51:17.465393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:00.903 [2024-07-15 18:51:17.473837] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190ed920
00:26:00.903 [2024-07-15 18:51:17.474486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.903 [2024-07-15 18:51:17.474506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:00.903 [2024-07-15 18:51:17.483521] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190fcdd0
00:26:00.903 [2024-07-15 18:51:17.484347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.903 [2024-07-15 18:51:17.484365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:00.903 [2024-07-15 18:51:17.493505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f8a50
00:26:00.903 [2024-07-15 18:51:17.494378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.903 [2024-07-15 18:51:17.494396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:26:00.903 [2024-07-15 18:51:17.502964] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f20d8
00:26:00.903 [2024-07-15 18:51:17.503766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.903 [2024-07-15 18:51:17.503783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:26:00.903 [2024-07-15 18:51:17.512343] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f9b30
00:26:00.903 [2024-07-15 18:51:17.513380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.903 [2024-07-15 18:51:17.513398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:26:00.903 [2024-07-15 18:51:17.521469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f96f8
00:26:00.903 [2024-07-15 18:51:17.522478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.903 [2024-07-15 18:51:17.522496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:26:00.903 [2024-07-15 18:51:17.530634] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e6b70
00:26:00.903 [2024-07-15 18:51:17.531670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.903 [2024-07-15 18:51:17.531689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:26:00.903 [2024-07-15 18:51:17.540079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190ff3c8
00:26:00.903 [2024-07-15 18:51:17.541178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.903 [2024-07-15 18:51:17.541196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:26:00.903 [2024-07-15 18:51:17.550505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190df550
00:26:00.903 [2024-07-15 18:51:17.552007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.903 [2024-07-15 18:51:17.552025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:26:00.903 [2024-07-15 18:51:17.556982] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190de8a8
00:26:00.903 [2024-07-15 18:51:17.557718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.903 [2024-07-15 18:51:17.557736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:26:00.903 [2024-07-15 18:51:17.566281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190fa3a0
00:26:00.903 [2024-07-15 18:51:17.566933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.903 [2024-07-15 18:51:17.566951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:26:00.903 [2024-07-15 18:51:17.575404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190fcdd0
00:26:00.903 [2024-07-15 18:51:17.576113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.903 [2024-07-15 18:51:17.576131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:26:00.903 [2024-07-15 18:51:17.584430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e27f0
00:26:00.903 [2024-07-15 18:51:17.585152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.903 [2024-07-15 18:51:17.585170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:26:00.903 [2024-07-15 18:51:17.593553] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e4578
00:26:00.903 [2024-07-15 18:51:17.594285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.903 [2024-07-15 18:51:17.594303] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:00.903 [2024-07-15 18:51:17.602730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190eb760 00:26:00.903 [2024-07-15 18:51:17.603432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.903 [2024-07-15 18:51:17.603453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:01.163 [2024-07-15 18:51:17.612159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190fe2e8 00:26:01.163 [2024-07-15 18:51:17.612923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.163 [2024-07-15 18:51:17.612942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:01.163 [2024-07-15 18:51:17.621410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190eee38 00:26:01.163 [2024-07-15 18:51:17.622062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.163 [2024-07-15 18:51:17.622079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:01.163 [2024-07-15 18:51:17.630514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190df118 00:26:01.163 [2024-07-15 18:51:17.631149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:01.163 [2024-07-15 18:51:17.631166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:01.163 [2024-07-15 18:51:17.639936] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f2510 00:26:01.163 [2024-07-15 18:51:17.640446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.163 [2024-07-15 18:51:17.640463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:01.163 [2024-07-15 18:51:17.649279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190fe2e8 00:26:01.163 [2024-07-15 18:51:17.650116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.163 [2024-07-15 18:51:17.650134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:01.163 [2024-07-15 18:51:17.657767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f8618 00:26:01.163 [2024-07-15 18:51:17.658576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.163 [2024-07-15 18:51:17.658593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:01.163 [2024-07-15 18:51:17.667396] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f4298 00:26:01.163 [2024-07-15 18:51:17.668341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:18253 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.163 [2024-07-15 18:51:17.668359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:01.163 [2024-07-15 18:51:17.676728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190df988 00:26:01.163 [2024-07-15 18:51:17.677653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.163 [2024-07-15 18:51:17.677671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:01.163 [2024-07-15 18:51:17.685677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e84c0 00:26:01.163 [2024-07-15 18:51:17.686602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.163 [2024-07-15 18:51:17.686619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:01.163 [2024-07-15 18:51:17.695884] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190df118 00:26:01.163 [2024-07-15 18:51:17.696998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.163 [2024-07-15 18:51:17.697016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:01.163 [2024-07-15 18:51:17.705078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190fdeb0 00:26:01.163 [2024-07-15 18:51:17.706199] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.163 [2024-07-15 18:51:17.706217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:01.163 [2024-07-15 18:51:17.714292] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e6fa8 00:26:01.163 [2024-07-15 18:51:17.715355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.163 [2024-07-15 18:51:17.715373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:01.163 [2024-07-15 18:51:17.723414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190ef270 00:26:01.163 [2024-07-15 18:51:17.724498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.163 [2024-07-15 18:51:17.724516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:01.163 [2024-07-15 18:51:17.732828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e3060 00:26:01.163 [2024-07-15 18:51:17.733945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.163 [2024-07-15 18:51:17.733963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:01.163 [2024-07-15 18:51:17.742254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e5658 00:26:01.163 [2024-07-15 18:51:17.743364] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.163 [2024-07-15 18:51:17.743381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:01.163 [2024-07-15 18:51:17.751487] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f5be8 00:26:01.163 [2024-07-15 18:51:17.752587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.163 [2024-07-15 18:51:17.752605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:01.163 [2024-07-15 18:51:17.760714] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f1ca0 00:26:01.163 [2024-07-15 18:51:17.761772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.163 [2024-07-15 18:51:17.761789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:01.163 [2024-07-15 18:51:17.770067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f57b0 00:26:01.163 [2024-07-15 18:51:17.771164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.163 [2024-07-15 18:51:17.771182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:01.163 [2024-07-15 18:51:17.779252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190ea680 
00:26:01.163 [2024-07-15 18:51:17.780288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.163 [2024-07-15 18:51:17.780306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:01.163 [2024-07-15 18:51:17.788398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e3d08 00:26:01.163 [2024-07-15 18:51:17.789394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.163 [2024-07-15 18:51:17.789411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:01.163 [2024-07-15 18:51:17.797766] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f8a50 00:26:01.164 [2024-07-15 18:51:17.798824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.164 [2024-07-15 18:51:17.798843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:01.164 [2024-07-15 18:51:17.806903] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f96f8 00:26:01.164 [2024-07-15 18:51:17.807984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.164 [2024-07-15 18:51:17.808001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:01.164 [2024-07-15 18:51:17.816043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x99f4d0) with pdu=0x2000190e1b48 00:26:01.164 [2024-07-15 18:51:17.817126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.164 [2024-07-15 18:51:17.817144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:01.164 [2024-07-15 18:51:17.825190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e5a90 00:26:01.164 [2024-07-15 18:51:17.826251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.164 [2024-07-15 18:51:17.826269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:01.164 [2024-07-15 18:51:17.834340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f81e0 00:26:01.164 [2024-07-15 18:51:17.835327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.164 [2024-07-15 18:51:17.835346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:01.164 [2024-07-15 18:51:17.843481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190ec840 00:26:01.164 [2024-07-15 18:51:17.844473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.164 [2024-07-15 18:51:17.844495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:01.164 [2024-07-15 18:51:17.852618] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190ed920 00:26:01.164 [2024-07-15 18:51:17.853608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.164 [2024-07-15 18:51:17.853625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:01.164 [2024-07-15 18:51:17.861794] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190fe720 00:26:01.164 [2024-07-15 18:51:17.862804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.164 [2024-07-15 18:51:17.862822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:01.422 [2024-07-15 18:51:17.871530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e4578 00:26:01.422 [2024-07-15 18:51:17.872755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.422 [2024-07-15 18:51:17.872772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:01.422 [2024-07-15 18:51:17.879146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190fd208 00:26:01.422 [2024-07-15 18:51:17.879763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.422 [2024-07-15 18:51:17.879781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 
00:26:01.422 [2024-07-15 18:51:17.889491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e0630 00:26:01.422 [2024-07-15 18:51:17.890674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.423 [2024-07-15 18:51:17.890693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:01.423 [2024-07-15 18:51:17.899055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e1b48 00:26:01.423 [2024-07-15 18:51:17.900394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.423 [2024-07-15 18:51:17.900412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:01.423 [2024-07-15 18:51:17.907600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190fbcf0 00:26:01.423 [2024-07-15 18:51:17.908454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.423 [2024-07-15 18:51:17.908472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:01.423 [2024-07-15 18:51:17.916868] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f4f40 00:26:01.423 [2024-07-15 18:51:17.917608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.423 [2024-07-15 18:51:17.917626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:75 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:01.423 [2024-07-15 18:51:17.926233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190ec840 00:26:01.423 [2024-07-15 18:51:17.927298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.423 [2024-07-15 18:51:17.927316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:01.423 [2024-07-15 18:51:17.935389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190fe2e8 00:26:01.423 [2024-07-15 18:51:17.936448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.423 [2024-07-15 18:51:17.936466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:01.423 [2024-07-15 18:51:17.944525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e27f0 00:26:01.423 [2024-07-15 18:51:17.945571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.423 [2024-07-15 18:51:17.945588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:01.423 [2024-07-15 18:51:17.953690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190eee38 00:26:01.423 [2024-07-15 18:51:17.954769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.423 [2024-07-15 18:51:17.954787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:01.423 [2024-07-15 18:51:17.962846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e6738 00:26:01.423 [2024-07-15 18:51:17.963934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.423 [2024-07-15 18:51:17.963951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:01.423 [2024-07-15 18:51:17.971995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e49b0 00:26:01.423 [2024-07-15 18:51:17.973057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.423 [2024-07-15 18:51:17.973074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:01.423 [2024-07-15 18:51:17.981143] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190fd640 00:26:01.423 [2024-07-15 18:51:17.982269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.423 [2024-07-15 18:51:17.982287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:01.423 [2024-07-15 18:51:17.990472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f4298 00:26:01.423 [2024-07-15 18:51:17.991587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.423 [2024-07-15 18:51:17.991605] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:01.423 [2024-07-15 18:51:17.999780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190ebb98 00:26:01.423 [2024-07-15 18:51:18.000902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.423 [2024-07-15 18:51:18.000920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:01.423 [2024-07-15 18:51:18.009051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f6890 00:26:01.423 [2024-07-15 18:51:18.010166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.423 [2024-07-15 18:51:18.010184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:01.423 [2024-07-15 18:51:18.018255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e0630 00:26:01.423 [2024-07-15 18:51:18.019332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.423 [2024-07-15 18:51:18.019349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:01.423 [2024-07-15 18:51:18.027430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f3e60 00:26:01.423 [2024-07-15 18:51:18.028492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.423 
[2024-07-15 18:51:18.028510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:01.423 [2024-07-15 18:51:18.036579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190fac10 00:26:01.423 [2024-07-15 18:51:18.037659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.423 [2024-07-15 18:51:18.037676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:01.423 [2024-07-15 18:51:18.045714] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f9b30 00:26:01.423 [2024-07-15 18:51:18.046792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.423 [2024-07-15 18:51:18.046810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:01.423 [2024-07-15 18:51:18.054860] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e73e0 00:26:01.423 [2024-07-15 18:51:18.055953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.423 [2024-07-15 18:51:18.055970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:01.423 [2024-07-15 18:51:18.064008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e8d30 00:26:01.423 [2024-07-15 18:51:18.064984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19477 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.423 [2024-07-15 18:51:18.065001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:01.423 [2024-07-15 18:51:18.072280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190ea680 00:26:01.423 [2024-07-15 18:51:18.073696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.423 [2024-07-15 18:51:18.073713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:01.423 [2024-07-15 18:51:18.081556] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190ff3c8 00:26:01.423 [2024-07-15 18:51:18.082616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.423 [2024-07-15 18:51:18.082636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:01.423 [2024-07-15 18:51:18.091170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190ee190 00:26:01.423 [2024-07-15 18:51:18.092343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.423 [2024-07-15 18:51:18.092361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:01.423 [2024-07-15 18:51:18.099273] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190eaab8 00:26:01.423 [2024-07-15 18:51:18.099750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:98 nsid:1 lba:11956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.423 [2024-07-15 18:51:18.099767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:01.423 [2024-07-15 18:51:18.108540] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f7da8 00:26:01.423 [2024-07-15 18:51:18.109350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.423 [2024-07-15 18:51:18.109368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:01.423 [2024-07-15 18:51:18.117116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190ebb98 00:26:01.423 [2024-07-15 18:51:18.117942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.423 [2024-07-15 18:51:18.117960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:01.423 [2024-07-15 18:51:18.128121] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f2510 00:26:01.682 [2024-07-15 18:51:18.129444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.682 [2024-07-15 18:51:18.129462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:01.682 [2024-07-15 18:51:18.136404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e27f0 00:26:01.682 [2024-07-15 18:51:18.137016] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.682 [2024-07-15 18:51:18.137034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:01.682 [2024-07-15 18:51:18.145726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f1ca0 00:26:01.682 [2024-07-15 18:51:18.146680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.682 [2024-07-15 18:51:18.146699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:01.682 [2024-07-15 18:51:18.154816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e01f8 00:26:01.682 [2024-07-15 18:51:18.155384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.682 [2024-07-15 18:51:18.155402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:01.682 [2024-07-15 18:51:18.164381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f20d8 00:26:01.682 [2024-07-15 18:51:18.165074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.682 [2024-07-15 18:51:18.165095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:01.682 [2024-07-15 18:51:18.173952] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190fe720 00:26:01.682 
[2024-07-15 18:51:18.174764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.682 [2024-07-15 18:51:18.174782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.682 [2024-07-15 18:51:18.183185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f0bc0 00:26:01.682 [2024-07-15 18:51:18.184285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.682 [2024-07-15 18:51:18.184303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:01.682 [2024-07-15 18:51:18.192241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190ee190 00:26:01.682 [2024-07-15 18:51:18.193040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.682 [2024-07-15 18:51:18.193059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:01.682 [2024-07-15 18:51:18.200436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190fe720 00:26:01.682 [2024-07-15 18:51:18.201445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.682 [2024-07-15 18:51:18.201462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:01.682 [2024-07-15 18:51:18.210635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x99f4d0) with pdu=0x2000190ed920 00:26:01.682 [2024-07-15 18:51:18.211690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.682 [2024-07-15 18:51:18.211708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.682 [2024-07-15 18:51:18.220090] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190fb8b8 00:26:01.682 [2024-07-15 18:51:18.221352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.682 [2024-07-15 18:51:18.221370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.682 [2024-07-15 18:51:18.228745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e73e0 00:26:01.682 [2024-07-15 18:51:18.230003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.682 [2024-07-15 18:51:18.230020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:01.683 [2024-07-15 18:51:18.238290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190fcdd0 00:26:01.683 [2024-07-15 18:51:18.239737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.683 [2024-07-15 18:51:18.239756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:01.683 [2024-07-15 18:51:18.247015] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190eea00 00:26:01.683 [2024-07-15 18:51:18.247968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.683 [2024-07-15 18:51:18.247987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:01.683 [2024-07-15 18:51:18.255505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e1b48 00:26:01.683 [2024-07-15 18:51:18.256534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:53 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.683 [2024-07-15 18:51:18.256552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:01.683 [2024-07-15 18:51:18.265964] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f2d80 00:26:01.683 [2024-07-15 18:51:18.267013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.683 [2024-07-15 18:51:18.267031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:01.683 [2024-07-15 18:51:18.275410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190eea00 00:26:01.683 [2024-07-15 18:51:18.276657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.683 [2024-07-15 18:51:18.276675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007c p:0 m:0 dnr:0 
00:26:01.683 [2024-07-15 18:51:18.284048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e0a68 00:26:01.683 [2024-07-15 18:51:18.285278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.683 [2024-07-15 18:51:18.285296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:01.683 [2024-07-15 18:51:18.292208] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190fe2e8 00:26:01.683 [2024-07-15 18:51:18.292755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.683 [2024-07-15 18:51:18.292774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:01.683 [2024-07-15 18:51:18.301465] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e23b8 00:26:01.683 [2024-07-15 18:51:18.302339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.683 [2024-07-15 18:51:18.302356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:01.683 [2024-07-15 18:51:18.310660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f8e88 00:26:01.683 [2024-07-15 18:51:18.311193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.683 [2024-07-15 18:51:18.311211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:01.683 [2024-07-15 18:51:18.319689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e27f0 00:26:01.683 [2024-07-15 18:51:18.320212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.683 [2024-07-15 18:51:18.320234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:01.683 [2024-07-15 18:51:18.328047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f6020 00:26:01.683 [2024-07-15 18:51:18.328763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.683 [2024-07-15 18:51:18.328780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:01.683 [2024-07-15 18:51:18.338247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f2510 00:26:01.683 [2024-07-15 18:51:18.339024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.683 [2024-07-15 18:51:18.339041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:01.683 [2024-07-15 18:51:18.347516] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e49b0 00:26:01.683 [2024-07-15 18:51:18.348311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.683 [2024-07-15 18:51:18.348329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:01.683 [2024-07-15 18:51:18.356671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190ef270 00:26:01.683 [2024-07-15 18:51:18.357457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.683 [2024-07-15 18:51:18.357475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:01.683 [2024-07-15 18:51:18.365836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e5ec8 00:26:01.683 [2024-07-15 18:51:18.366622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.683 [2024-07-15 18:51:18.366640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:01.683 [2024-07-15 18:51:18.375282] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190fc128 00:26:01.683 [2024-07-15 18:51:18.376266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.683 [2024-07-15 18:51:18.376284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:01.683 [2024-07-15 18:51:18.384534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190dece0 00:26:01.683 [2024-07-15 18:51:18.385464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.683 [2024-07-15 18:51:18.385483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:01.942 [2024-07-15 18:51:18.393955] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190ecc78 00:26:01.942 [2024-07-15 18:51:18.394858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.942 [2024-07-15 18:51:18.394875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:01.942 [2024-07-15 18:51:18.403152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e84c0 00:26:01.942 [2024-07-15 18:51:18.404058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.942 [2024-07-15 18:51:18.404078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:01.942 [2024-07-15 18:51:18.412307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190fb480 00:26:01.942 [2024-07-15 18:51:18.413200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.942 [2024-07-15 18:51:18.413218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:01.942 [2024-07-15 18:51:18.421449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190eb760 00:26:01.942 [2024-07-15 18:51:18.422351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.942 
[2024-07-15 18:51:18.422369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:01.942 [2024-07-15 18:51:18.430887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e88f8 00:26:01.942 [2024-07-15 18:51:18.432004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.942 [2024-07-15 18:51:18.432022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:01.942 [2024-07-15 18:51:18.439939] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190ea248 00:26:01.942 [2024-07-15 18:51:18.440701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.942 [2024-07-15 18:51:18.440719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:01.942 [2024-07-15 18:51:18.450428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190feb58 00:26:01.942 [2024-07-15 18:51:18.452007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.942 [2024-07-15 18:51:18.452024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:01.942 [2024-07-15 18:51:18.456888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f0bc0 00:26:01.942 [2024-07-15 18:51:18.457626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7184 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:01.942 [2024-07-15 18:51:18.457644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:01.942 [2024-07-15 18:51:18.465570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f6890 00:26:01.942 [2024-07-15 18:51:18.466298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.942 [2024-07-15 18:51:18.466316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:01.942 [2024-07-15 18:51:18.475633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e49b0 00:26:01.942 [2024-07-15 18:51:18.476500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.942 [2024-07-15 18:51:18.476517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:01.942 [2024-07-15 18:51:18.484297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e3d08 00:26:01.942 [2024-07-15 18:51:18.485188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.943 [2024-07-15 18:51:18.485205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:01.943 [2024-07-15 18:51:18.494567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e6fa8 00:26:01.943 [2024-07-15 18:51:18.495499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:118 nsid:1 lba:5846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.943 [2024-07-15 18:51:18.495517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:01.943 [2024-07-15 18:51:18.504264] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f2510 00:26:01.943 [2024-07-15 18:51:18.505411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.943 [2024-07-15 18:51:18.505428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:01.943 [2024-07-15 18:51:18.513030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190fac10 00:26:01.943 [2024-07-15 18:51:18.514132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.943 [2024-07-15 18:51:18.514149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:01.943 [2024-07-15 18:51:18.521532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e84c0 00:26:01.943 [2024-07-15 18:51:18.522169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.943 [2024-07-15 18:51:18.522187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:01.943 [2024-07-15 18:51:18.530850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e99d8 00:26:01.943 [2024-07-15 18:51:18.531373] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.943 [2024-07-15 18:51:18.531391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:01.943 [2024-07-15 18:51:18.541366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f0788 00:26:01.943 [2024-07-15 18:51:18.542707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.943 [2024-07-15 18:51:18.542724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:01.943 [2024-07-15 18:51:18.550938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190df118 00:26:01.943 [2024-07-15 18:51:18.552396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.943 [2024-07-15 18:51:18.552413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:01.943 [2024-07-15 18:51:18.560277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190eee38 00:26:01.943 [2024-07-15 18:51:18.561733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.943 [2024-07-15 18:51:18.561751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:01.943 [2024-07-15 18:51:18.567935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f5be8 00:26:01.943 
[2024-07-15 18:51:18.568600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.943 [2024-07-15 18:51:18.568618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:01.943 [2024-07-15 18:51:18.576071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e4578 00:26:01.943 [2024-07-15 18:51:18.576920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.943 [2024-07-15 18:51:18.576937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:01.943 [2024-07-15 18:51:18.586985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e6300 00:26:01.943 [2024-07-15 18:51:18.588317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.943 [2024-07-15 18:51:18.588334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:01.943 [2024-07-15 18:51:18.596529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190fcdd0 00:26:01.943 [2024-07-15 18:51:18.598042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.943 [2024-07-15 18:51:18.598059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:01.943 [2024-07-15 18:51:18.605072] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) 
with pdu=0x2000190f0bc0 00:26:01.943 [2024-07-15 18:51:18.606096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.943 [2024-07-15 18:51:18.606113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:01.943 [2024-07-15 18:51:18.613548] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e49b0 00:26:01.943 [2024-07-15 18:51:18.614851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.943 [2024-07-15 18:51:18.614868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:01.943 [2024-07-15 18:51:18.621995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190ef270 00:26:01.943 [2024-07-15 18:51:18.622640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.943 [2024-07-15 18:51:18.622658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:01.943 [2024-07-15 18:51:18.631453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f9f68 00:26:01.943 [2024-07-15 18:51:18.632306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.943 [2024-07-15 18:51:18.632324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:01.943 [2024-07-15 18:51:18.640120] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190eff18 00:26:01.943 [2024-07-15 18:51:18.640967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.943 [2024-07-15 18:51:18.640988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:02.202 [2024-07-15 18:51:18.649867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190de038 00:26:02.202 [2024-07-15 18:51:18.650866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.202 [2024-07-15 18:51:18.650884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:02.202 [2024-07-15 18:51:18.660901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190e9168 00:26:02.202 [2024-07-15 18:51:18.662347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.202 [2024-07-15 18:51:18.662365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:02.202 [2024-07-15 18:51:18.669027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f2510 00:26:02.202 [2024-07-15 18:51:18.669775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.202 [2024-07-15 18:51:18.669793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:02.202 [2024-07-15 
18:51:18.678294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190f7da8 00:26:02.202 [2024-07-15 18:51:18.679377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.202 [2024-07-15 18:51:18.679395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:02.202 [2024-07-15 18:51:18.687349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f4d0) with pdu=0x2000190ec840 00:26:02.202 [2024-07-15 18:51:18.688077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.202 [2024-07-15 18:51:18.688095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:02.202 00:26:02.202 Latency(us) 00:26:02.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.202 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:02.202 nvme0n1 : 2.00 27735.73 108.34 0.00 0.00 4608.89 1823.61 13221.18 00:26:02.202 =================================================================================================================== 00:26:02.202 Total : 27735.73 108.34 0.00 0.00 4608.89 1823.61 13221.18 00:26:02.202 0 00:26:02.202 18:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:02.202 18:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:02.202 18:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:02.202 | .driver_specific 00:26:02.202 | .nvme_error 00:26:02.202 | .status_code 00:26:02.202 | .command_transient_transport_error' 00:26:02.202 
18:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:02.202 18:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 217 > 0 )) 00:26:02.202 18:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1237301 00:26:02.202 18:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1237301 ']' 00:26:02.202 18:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1237301 00:26:02.202 18:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:02.202 18:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:02.202 18:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1237301 00:26:02.462 18:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:02.462 18:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:02.462 18:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1237301' 00:26:02.462 killing process with pid 1237301 00:26:02.462 18:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1237301 00:26:02.462 Received shutdown signal, test time was about 2.000000 seconds 00:26:02.462 00:26:02.462 Latency(us) 00:26:02.462 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.462 =================================================================================================================== 00:26:02.462 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:02.462 18:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@972 -- # wait 1237301 00:26:02.462 18:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:02.462 18:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:02.462 18:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:02.462 18:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:02.462 18:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:02.462 18:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1237889 00:26:02.462 18:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1237889 /var/tmp/bperf.sock 00:26:02.462 18:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:02.462 18:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1237889 ']' 00:26:02.462 18:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:02.462 18:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:02.462 18:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:02.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:02.462 18:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:02.462 18:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:02.721 [2024-07-15 18:51:19.170697] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:26:02.721 [2024-07-15 18:51:19.170746] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1237889 ] 00:26:02.721 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:02.721 Zero copy mechanism will not be used. 00:26:02.721 EAL: No free 2048 kB hugepages reported on node 1 00:26:02.721 [2024-07-15 18:51:19.224165] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.721 [2024-07-15 18:51:19.303482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.289 18:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:03.289 18:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:03.289 18:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:03.289 18:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:03.547 18:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:03.547 18:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.547 18:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 
-- # set +x 00:26:03.547 18:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.547 18:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:03.547 18:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:03.805 nvme0n1 00:26:03.805 18:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:03.805 18:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.805 18:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:03.806 18:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.806 18:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:03.806 18:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:04.065 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:04.065 Zero copy mechanism will not be used. 00:26:04.065 Running I/O for 2 seconds... 
00:26:04.065 [2024-07-15 18:51:20.552024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.065 [2024-07-15 18:51:20.552429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.065 [2024-07-15 18:51:20.552458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.065 [2024-07-15 18:51:20.556858] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.065 [2024-07-15 18:51:20.557235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.065 [2024-07-15 18:51:20.557257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.065 [2024-07-15 18:51:20.561568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.065 [2024-07-15 18:51:20.561956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.065 [2024-07-15 18:51:20.561978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.065 [2024-07-15 18:51:20.566203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.065 [2024-07-15 18:51:20.566588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.065 [2024-07-15 18:51:20.566610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.065 [2024-07-15 18:51:20.570817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.065 [2024-07-15 18:51:20.571197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.065 [2024-07-15 18:51:20.571218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.065 [2024-07-15 18:51:20.575430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.065 [2024-07-15 18:51:20.575802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.065 [2024-07-15 18:51:20.575821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.065 [2024-07-15 18:51:20.579908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.065 [2024-07-15 18:51:20.580279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.065 [2024-07-15 18:51:20.580298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.065 [2024-07-15 18:51:20.584862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.065 [2024-07-15 18:51:20.585247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.065 [2024-07-15 18:51:20.585265] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.065 [2024-07-15 18:51:20.589947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.065 [2024-07-15 18:51:20.590330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.065 [2024-07-15 18:51:20.590349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.065 [2024-07-15 18:51:20.594579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.065 [2024-07-15 18:51:20.594956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.065 [2024-07-15 18:51:20.594975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.065 [2024-07-15 18:51:20.599195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.065 [2024-07-15 18:51:20.599583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.065 [2024-07-15 18:51:20.599602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.065 [2024-07-15 18:51:20.603845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.065 [2024-07-15 18:51:20.604216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:04.065 [2024-07-15 18:51:20.604241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.065 [2024-07-15 18:51:20.608365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.065 [2024-07-15 18:51:20.608733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.065 [2024-07-15 18:51:20.608756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.065 [2024-07-15 18:51:20.612906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.065 [2024-07-15 18:51:20.613295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.065 [2024-07-15 18:51:20.613314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.065 [2024-07-15 18:51:20.617485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.065 [2024-07-15 18:51:20.617866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.065 [2024-07-15 18:51:20.617885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.065 [2024-07-15 18:51:20.622432] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.065 [2024-07-15 18:51:20.622802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.065 [2024-07-15 18:51:20.622821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.065 [2024-07-15 18:51:20.627451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.065 [2024-07-15 18:51:20.627807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.065 [2024-07-15 18:51:20.627826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.065 [2024-07-15 18:51:20.632947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.065 [2024-07-15 18:51:20.633321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.065 [2024-07-15 18:51:20.633339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.065 [2024-07-15 18:51:20.637643] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.065 [2024-07-15 18:51:20.638023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.065 [2024-07-15 18:51:20.638041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.065 [2024-07-15 18:51:20.642489] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.065 [2024-07-15 18:51:20.642847] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.065 [2024-07-15 18:51:20.642866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.065 [2024-07-15 18:51:20.647269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.065 [2024-07-15 18:51:20.647650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.065 [2024-07-15 18:51:20.647670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.065 [2024-07-15 18:51:20.652048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.065 [2024-07-15 18:51:20.652432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.065 [2024-07-15 18:51:20.652454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.065 [2024-07-15 18:51:20.656924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.065 [2024-07-15 18:51:20.657309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.065 [2024-07-15 18:51:20.657328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.065 [2024-07-15 18:51:20.661799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 
00:26:04.065 [2024-07-15 18:51:20.662168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.065 [2024-07-15 18:51:20.662187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.065 [2024-07-15 18:51:20.666460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.066 [2024-07-15 18:51:20.666850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.066 [2024-07-15 18:51:20.666869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.066 [2024-07-15 18:51:20.671148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.066 [2024-07-15 18:51:20.671545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.066 [2024-07-15 18:51:20.671564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.066 [2024-07-15 18:51:20.675871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.066 [2024-07-15 18:51:20.676254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.066 [2024-07-15 18:51:20.676273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.066 [2024-07-15 18:51:20.680608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.066 [2024-07-15 18:51:20.680987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.066 [2024-07-15 18:51:20.681005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.066 [2024-07-15 18:51:20.685413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.066 [2024-07-15 18:51:20.685788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.066 [2024-07-15 18:51:20.685807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.066 [2024-07-15 18:51:20.690272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.066 [2024-07-15 18:51:20.690684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.066 [2024-07-15 18:51:20.690703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.066 [2024-07-15 18:51:20.695272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.066 [2024-07-15 18:51:20.695667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.066 [2024-07-15 18:51:20.695687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.066 [2024-07-15 18:51:20.700210] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.066 [2024-07-15 18:51:20.700605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.066 [2024-07-15 18:51:20.700625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.066 [2024-07-15 18:51:20.705862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.066 [2024-07-15 18:51:20.706254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.066 [2024-07-15 18:51:20.706273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.066 [2024-07-15 18:51:20.711531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.066 [2024-07-15 18:51:20.711905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.066 [2024-07-15 18:51:20.711924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.066 [2024-07-15 18:51:20.718048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.066 [2024-07-15 18:51:20.718440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.066 [2024-07-15 18:51:20.718460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:26:04.066 [2024-07-15 18:51:20.723798] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.066 [2024-07-15 18:51:20.724177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.066 [2024-07-15 18:51:20.724196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.066 [2024-07-15 18:51:20.730164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.066 [2024-07-15 18:51:20.730553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.066 [2024-07-15 18:51:20.730572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.066 [2024-07-15 18:51:20.736748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.066 [2024-07-15 18:51:20.737164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.066 [2024-07-15 18:51:20.737183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.066 [2024-07-15 18:51:20.743477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.066 [2024-07-15 18:51:20.743861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.066 [2024-07-15 18:51:20.743884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.066 [2024-07-15 18:51:20.749759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.066 [2024-07-15 18:51:20.750143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.066 [2024-07-15 18:51:20.750163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.066 [2024-07-15 18:51:20.756433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.066 [2024-07-15 18:51:20.756806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.066 [2024-07-15 18:51:20.756826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.066 [2024-07-15 18:51:20.762585] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.066 [2024-07-15 18:51:20.762956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.066 [2024-07-15 18:51:20.762975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.066 [2024-07-15 18:51:20.769191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.066 [2024-07-15 18:51:20.769572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.066 [2024-07-15 18:51:20.769592] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.337 [2024-07-15 18:51:20.776184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.337 [2024-07-15 18:51:20.776560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.337 [2024-07-15 18:51:20.776579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.337 [2024-07-15 18:51:20.782543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.337 [2024-07-15 18:51:20.782901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.337 [2024-07-15 18:51:20.782920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.337 [2024-07-15 18:51:20.788704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.337 [2024-07-15 18:51:20.789067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.337 [2024-07-15 18:51:20.789086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.337 [2024-07-15 18:51:20.795515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.337 [2024-07-15 18:51:20.795868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:04.337 [2024-07-15 18:51:20.795886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.337 [2024-07-15 18:51:20.801786] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.337 [2024-07-15 18:51:20.802155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.337 [2024-07-15 18:51:20.802174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.337 [2024-07-15 18:51:20.808381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.337 [2024-07-15 18:51:20.808743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.337 [2024-07-15 18:51:20.808762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.337 [2024-07-15 18:51:20.814919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.337 [2024-07-15 18:51:20.815339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.337 [2024-07-15 18:51:20.815357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.337 [2024-07-15 18:51:20.821143] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.337 [2024-07-15 18:51:20.821537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.337 [2024-07-15 18:51:20.821555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.337 [2024-07-15 18:51:20.827481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.337 [2024-07-15 18:51:20.827854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.337 [2024-07-15 18:51:20.827872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.337 [2024-07-15 18:51:20.833456] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.337 [2024-07-15 18:51:20.833818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.337 [2024-07-15 18:51:20.833836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.337 [2024-07-15 18:51:20.839001] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.337 [2024-07-15 18:51:20.839368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.337 [2024-07-15 18:51:20.839387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.337 [2024-07-15 18:51:20.844184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.337 [2024-07-15 18:51:20.844537] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.337 [2024-07-15 18:51:20.844556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.337 [2024-07-15 18:51:20.849978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.337 [2024-07-15 18:51:20.850341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.337 [2024-07-15 18:51:20.850360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.337 [2024-07-15 18:51:20.855584] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.337 [2024-07-15 18:51:20.855952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.337 [2024-07-15 18:51:20.855970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.337 [2024-07-15 18:51:20.860956] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.337 [2024-07-15 18:51:20.861313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.337 [2024-07-15 18:51:20.861331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.337 [2024-07-15 18:51:20.866514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 
00:26:04.337 [2024-07-15 18:51:20.866878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.337 [2024-07-15 18:51:20.866896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.337 [2024-07-15 18:51:20.871930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.337 [2024-07-15 18:51:20.872290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.337 [2024-07-15 18:51:20.872308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.337 [2024-07-15 18:51:20.877112] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.337 [2024-07-15 18:51:20.877457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.337 [2024-07-15 18:51:20.877476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.337 [2024-07-15 18:51:20.882354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.337 [2024-07-15 18:51:20.882712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.337 [2024-07-15 18:51:20.882730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.337 [2024-07-15 18:51:20.887808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.337 [2024-07-15 18:51:20.888169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.337 [2024-07-15 18:51:20.888187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.337 [2024-07-15 18:51:20.893582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.337 [2024-07-15 18:51:20.893932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.337 [2024-07-15 18:51:20.893951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.337 [2024-07-15 18:51:20.898356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.337 [2024-07-15 18:51:20.898719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.337 [2024-07-15 18:51:20.898742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.338 [2024-07-15 18:51:20.902987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.338 [2024-07-15 18:51:20.903326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.338 [2024-07-15 18:51:20.903344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.338 [2024-07-15 18:51:20.908257] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.338 [2024-07-15 18:51:20.908604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.338 [2024-07-15 18:51:20.908622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.338 [2024-07-15 18:51:20.913131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.338 [2024-07-15 18:51:20.913471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.338 [2024-07-15 18:51:20.913489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.338 [2024-07-15 18:51:20.917684] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.338 [2024-07-15 18:51:20.918018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.338 [2024-07-15 18:51:20.918036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.338 [2024-07-15 18:51:20.922187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.338 [2024-07-15 18:51:20.922525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.338 [2024-07-15 18:51:20.922543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:26:04.338 [2024-07-15 18:51:20.926587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.338 [2024-07-15 18:51:20.926908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.338 [2024-07-15 18:51:20.926926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.338 [2024-07-15 18:51:20.931482] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.338 [2024-07-15 18:51:20.931816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.338 [2024-07-15 18:51:20.931834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.338 [2024-07-15 18:51:20.937342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.338 [2024-07-15 18:51:20.937768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.338 [2024-07-15 18:51:20.937786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.338 [2024-07-15 18:51:20.943857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.338 [2024-07-15 18:51:20.944191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.338 [2024-07-15 18:51:20.944209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.338 [2024-07-15 18:51:20.950027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.338 [2024-07-15 18:51:20.950437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.338 [2024-07-15 18:51:20.950455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.338 [2024-07-15 18:51:20.957134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.338 [2024-07-15 18:51:20.957497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.338 [2024-07-15 18:51:20.957515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.338 [2024-07-15 18:51:20.964280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.338 [2024-07-15 18:51:20.964711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.338 [2024-07-15 18:51:20.964729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.338 [2024-07-15 18:51:20.971930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.338 [2024-07-15 18:51:20.972318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.338 [2024-07-15 18:51:20.972336] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.338 [2024-07-15 18:51:20.979452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.338 [2024-07-15 18:51:20.979853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.338 [2024-07-15 18:51:20.979871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.338 [2024-07-15 18:51:20.987247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.338 [2024-07-15 18:51:20.987638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.338 [2024-07-15 18:51:20.987655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.338 [2024-07-15 18:51:20.993789] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.338 [2024-07-15 18:51:20.994124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.338 [2024-07-15 18:51:20.994143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.338 [2024-07-15 18:51:20.998817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.338 [2024-07-15 18:51:20.999119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:04.338 [2024-07-15 18:51:20.999137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.338 [2024-07-15 18:51:21.003627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.338 [2024-07-15 18:51:21.003920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.338 [2024-07-15 18:51:21.003938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.338 [2024-07-15 18:51:21.008093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.338 [2024-07-15 18:51:21.008383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.338 [2024-07-15 18:51:21.008401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.338 [2024-07-15 18:51:21.012143] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.338 [2024-07-15 18:51:21.012386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.338 [2024-07-15 18:51:21.012403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.338 [2024-07-15 18:51:21.015911] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.338 [2024-07-15 18:51:21.016133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.338 [2024-07-15 18:51:21.016152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.338 [2024-07-15 18:51:21.019691] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.338 [2024-07-15 18:51:21.019912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.338 [2024-07-15 18:51:21.019930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.338 [2024-07-15 18:51:21.023443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.338 [2024-07-15 18:51:21.023655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.338 [2024-07-15 18:51:21.023673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.338 [2024-07-15 18:51:21.027206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.338 [2024-07-15 18:51:21.027438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.338 [2024-07-15 18:51:21.027456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.338 [2024-07-15 18:51:21.031200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.338 [2024-07-15 18:51:21.031441] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.338 [2024-07-15 18:51:21.031459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.654 [2024-07-15 18:51:21.036100] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.654 [2024-07-15 18:51:21.036343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.654 [2024-07-15 18:51:21.036368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.654 [2024-07-15 18:51:21.041243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.654 [2024-07-15 18:51:21.041467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.654 [2024-07-15 18:51:21.041488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.654 [2024-07-15 18:51:21.045885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.654 [2024-07-15 18:51:21.046113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.654 [2024-07-15 18:51:21.046138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.654 [2024-07-15 18:51:21.050261] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 
00:26:04.654 [2024-07-15 18:51:21.050489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.654 [2024-07-15 18:51:21.050508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.654 [2024-07-15 18:51:21.054621] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.654 [2024-07-15 18:51:21.054840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.654 [2024-07-15 18:51:21.054859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.654 [2024-07-15 18:51:21.058932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.654 [2024-07-15 18:51:21.059153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.654 [2024-07-15 18:51:21.059172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.654 [2024-07-15 18:51:21.063183] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.654 [2024-07-15 18:51:21.063422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.654 [2024-07-15 18:51:21.063441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.654 [2024-07-15 18:51:21.067567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.654 [2024-07-15 18:51:21.067795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.654 [2024-07-15 18:51:21.067814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.654 [2024-07-15 18:51:21.071510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.654 [2024-07-15 18:51:21.071740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.654 [2024-07-15 18:51:21.071759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.654 [2024-07-15 18:51:21.075385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.654 [2024-07-15 18:51:21.075614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.654 [2024-07-15 18:51:21.075632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.654 [2024-07-15 18:51:21.079270] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.654 [2024-07-15 18:51:21.079503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.654 [2024-07-15 18:51:21.079522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.654 [2024-07-15 18:51:21.083128] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.654 [2024-07-15 18:51:21.083359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.654 [2024-07-15 18:51:21.083378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.654 [2024-07-15 18:51:21.087349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.654 [2024-07-15 18:51:21.087564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.654 [2024-07-15 18:51:21.087583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.654 [2024-07-15 18:51:21.092424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.654 [2024-07-15 18:51:21.092676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.654 [2024-07-15 18:51:21.092694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.654 [2024-07-15 18:51:21.097164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.654 [2024-07-15 18:51:21.097526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.654 [2024-07-15 18:51:21.097543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:26:04.654 [2024-07-15 18:51:21.101542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.654 [2024-07-15 18:51:21.101666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.654 [2024-07-15 18:51:21.101687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.654 [2024-07-15 18:51:21.105876] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.654 [2024-07-15 18:51:21.105963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.654 [2024-07-15 18:51:21.105980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.654 [2024-07-15 18:51:21.110231] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.654 [2024-07-15 18:51:21.110314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.654 [2024-07-15 18:51:21.110331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.654 [2024-07-15 18:51:21.114515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.654 [2024-07-15 18:51:21.114587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.654 [2024-07-15 18:51:21.114605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.654 [2024-07-15 18:51:21.118849] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.654 [2024-07-15 18:51:21.118915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.654 [2024-07-15 18:51:21.118933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.654 [2024-07-15 18:51:21.123367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.654 [2024-07-15 18:51:21.123439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.654 [2024-07-15 18:51:21.123457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.654 [2024-07-15 18:51:21.127648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.654 [2024-07-15 18:51:21.127757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.654 [2024-07-15 18:51:21.127774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.654 [2024-07-15 18:51:21.132245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.655 [2024-07-15 18:51:21.132306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.655 [2024-07-15 18:51:21.132322] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.655 [2024-07-15 18:51:21.136587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.655 [2024-07-15 18:51:21.136649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.655 [2024-07-15 18:51:21.136666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.655 [2024-07-15 18:51:21.140913] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.655 [2024-07-15 18:51:21.141010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.655 [2024-07-15 18:51:21.141028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.655 [2024-07-15 18:51:21.145268] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.655 [2024-07-15 18:51:21.145354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.655 [2024-07-15 18:51:21.145372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.655 [2024-07-15 18:51:21.149530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.655 [2024-07-15 18:51:21.149633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.655 [2024-07-15 18:51:21.149654] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.655 [2024-07-15 18:51:21.153675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.655 [2024-07-15 18:51:21.153761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.655 [2024-07-15 18:51:21.153779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.655 [2024-07-15 18:51:21.157907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.655 [2024-07-15 18:51:21.158019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.655 [2024-07-15 18:51:21.158041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.655 [2024-07-15 18:51:21.162340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.655 [2024-07-15 18:51:21.162453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.655 [2024-07-15 18:51:21.162470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.655 [2024-07-15 18:51:21.166843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.655 [2024-07-15 18:51:21.166942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:04.655 [2024-07-15 18:51:21.166959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.655 [2024-07-15 18:51:21.171084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.655 [2024-07-15 18:51:21.171149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.655 [2024-07-15 18:51:21.171166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.655 [2024-07-15 18:51:21.175124] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.655 [2024-07-15 18:51:21.175236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.655 [2024-07-15 18:51:21.175253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.655 [2024-07-15 18:51:21.179467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.655 [2024-07-15 18:51:21.179556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.655 [2024-07-15 18:51:21.179573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.655 [2024-07-15 18:51:21.183785] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.655 [2024-07-15 18:51:21.183889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.655 [2024-07-15 18:51:21.183906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.655 [2024-07-15 18:51:21.187543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.655 [2024-07-15 18:51:21.187633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.655 [2024-07-15 18:51:21.187650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.655 [2024-07-15 18:51:21.191281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.655 [2024-07-15 18:51:21.191371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.655 [2024-07-15 18:51:21.191388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.655 [2024-07-15 18:51:21.195073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.655 [2024-07-15 18:51:21.195178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.655 [2024-07-15 18:51:21.195195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.655 [2024-07-15 18:51:21.198824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.655 [2024-07-15 18:51:21.198911] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.655 [2024-07-15 18:51:21.198928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.655 [2024-07-15 18:51:21.202552] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.655 [2024-07-15 18:51:21.202649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.655 [2024-07-15 18:51:21.202666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.655 [2024-07-15 18:51:21.206274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.655 [2024-07-15 18:51:21.206372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.655 [2024-07-15 18:51:21.206388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.655 [2024-07-15 18:51:21.209954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.655 [2024-07-15 18:51:21.210046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.655 [2024-07-15 18:51:21.210063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.655 [2024-07-15 18:51:21.213643] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.655 [2024-07-15 18:51:21.213753] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.655 [2024-07-15 18:51:21.213770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.655 [2024-07-15 18:51:21.217362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.655 [2024-07-15 18:51:21.217471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.655 [2024-07-15 18:51:21.217488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.655 [2024-07-15 18:51:21.221091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.655 [2024-07-15 18:51:21.221182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.655 [2024-07-15 18:51:21.221199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.655 [2024-07-15 18:51:21.224800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.655 [2024-07-15 18:51:21.224900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.655 [2024-07-15 18:51:21.224917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.655 [2024-07-15 18:51:21.228606] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 
00:26:04.655 [2024-07-15 18:51:21.228721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.656 [2024-07-15 18:51:21.228739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.656 [2024-07-15 18:51:21.232733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.656 [2024-07-15 18:51:21.232870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.656 [2024-07-15 18:51:21.232888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.656 [2024-07-15 18:51:21.238262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.656 [2024-07-15 18:51:21.238431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.656 [2024-07-15 18:51:21.238449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.656 [2024-07-15 18:51:21.244045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.656 [2024-07-15 18:51:21.244202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.656 [2024-07-15 18:51:21.244220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.656 [2024-07-15 18:51:21.250498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.656 [2024-07-15 18:51:21.250621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.656 [2024-07-15 18:51:21.250640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.656 [2024-07-15 18:51:21.257919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.656 [2024-07-15 18:51:21.258095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.656 [2024-07-15 18:51:21.258113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.656 [2024-07-15 18:51:21.265554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.656 [2024-07-15 18:51:21.265722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.656 [2024-07-15 18:51:21.265744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.656 [2024-07-15 18:51:21.273502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.656 [2024-07-15 18:51:21.273641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.656 [2024-07-15 18:51:21.273659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.656 [2024-07-15 18:51:21.281545] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.656 [2024-07-15 18:51:21.281729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.656 [2024-07-15 18:51:21.281748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.656 [2024-07-15 18:51:21.289137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.656 [2024-07-15 18:51:21.289327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.656 [2024-07-15 18:51:21.289345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.656 [2024-07-15 18:51:21.296934] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.656 [2024-07-15 18:51:21.297121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.656 [2024-07-15 18:51:21.297140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.656 [2024-07-15 18:51:21.304833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.656 [2024-07-15 18:51:21.305069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.656 [2024-07-15 18:51:21.305087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:04.656 [2024-07-15 18:51:21.312710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.656 [2024-07-15 18:51:21.312856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.656 [2024-07-15 18:51:21.312874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.656 [2024-07-15 18:51:21.320704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.656 [2024-07-15 18:51:21.320883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.656 [2024-07-15 18:51:21.320902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.656 [2024-07-15 18:51:21.328595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.656 [2024-07-15 18:51:21.328792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.656 [2024-07-15 18:51:21.328813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.656 [2024-07-15 18:51:21.336652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.656 [2024-07-15 18:51:21.336773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.656 [2024-07-15 18:51:21.336794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.656 [2024-07-15 18:51:21.343500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.656 [2024-07-15 18:51:21.343631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.656 [2024-07-15 18:51:21.343649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.656 [2024-07-15 18:51:21.348954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.656 [2024-07-15 18:51:21.349064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.656 [2024-07-15 18:51:21.349082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.656 [2024-07-15 18:51:21.354357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.656 [2024-07-15 18:51:21.354517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.656 [2024-07-15 18:51:21.354536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.916 [2024-07-15 18:51:21.359788] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.916 [2024-07-15 18:51:21.359895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.916 [2024-07-15 18:51:21.359912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.916 [2024-07-15 18:51:21.366140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.916 [2024-07-15 18:51:21.366255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.916 [2024-07-15 18:51:21.366272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.916 [2024-07-15 18:51:21.372786] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.916 [2024-07-15 18:51:21.372909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.916 [2024-07-15 18:51:21.372927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.916 [2024-07-15 18:51:21.378894] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.916 [2024-07-15 18:51:21.379019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.916 [2024-07-15 18:51:21.379037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.916 [2024-07-15 18:51:21.384514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.916 [2024-07-15 18:51:21.384584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.916 [2024-07-15 18:51:21.384601] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.916 [2024-07-15 18:51:21.391990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.916 [2024-07-15 18:51:21.392146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.916 [2024-07-15 18:51:21.392163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.916 [2024-07-15 18:51:21.398608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.916 [2024-07-15 18:51:21.398811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.916 [2024-07-15 18:51:21.398829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.916 [2024-07-15 18:51:21.405974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.916 [2024-07-15 18:51:21.406099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.916 [2024-07-15 18:51:21.406117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.916 [2024-07-15 18:51:21.414123] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.916 [2024-07-15 18:51:21.414318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:04.916 [2024-07-15 18:51:21.414336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.916 [2024-07-15 18:51:21.421310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.917 [2024-07-15 18:51:21.421475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.917 [2024-07-15 18:51:21.421493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.917 [2024-07-15 18:51:21.428452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.917 [2024-07-15 18:51:21.428548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.917 [2024-07-15 18:51:21.428565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.917 [2024-07-15 18:51:21.434732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.917 [2024-07-15 18:51:21.434890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.917 [2024-07-15 18:51:21.434908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.917 [2024-07-15 18:51:21.441670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.917 [2024-07-15 18:51:21.441855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.917 [2024-07-15 18:51:21.441872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.917 [2024-07-15 18:51:21.448293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.917 [2024-07-15 18:51:21.448463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.917 [2024-07-15 18:51:21.448486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.917 [2024-07-15 18:51:21.455062] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.917 [2024-07-15 18:51:21.455239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.917 [2024-07-15 18:51:21.455258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.917 [2024-07-15 18:51:21.463836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.917 [2024-07-15 18:51:21.464045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.917 [2024-07-15 18:51:21.464064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.917 [2024-07-15 18:51:21.472255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.917 [2024-07-15 18:51:21.472446] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.917 [2024-07-15 18:51:21.472464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.917 [2024-07-15 18:51:21.478202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.917 [2024-07-15 18:51:21.478335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.917 [2024-07-15 18:51:21.478354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.917 [2024-07-15 18:51:21.483988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.917 [2024-07-15 18:51:21.484100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.917 [2024-07-15 18:51:21.484117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.917 [2024-07-15 18:51:21.490190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.917 [2024-07-15 18:51:21.490321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.917 [2024-07-15 18:51:21.490339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.917 [2024-07-15 18:51:21.495624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.917 [2024-07-15 18:51:21.495739] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.917 [2024-07-15 18:51:21.495756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.917 [2024-07-15 18:51:21.500896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.917 [2024-07-15 18:51:21.501014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.917 [2024-07-15 18:51:21.501032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.917 [2024-07-15 18:51:21.505416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.917 [2024-07-15 18:51:21.505518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.917 [2024-07-15 18:51:21.505539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.917 [2024-07-15 18:51:21.509728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:04.917 [2024-07-15 18:51:21.509832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.917 [2024-07-15 18:51:21.509848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.917 [2024-07-15 18:51:21.513913] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 
00:26:04.917 [2024-07-15 18:51:21.514031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.917 [2024-07-15 18:51:21.514051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:04.917 [2024-07-15 18:51:21.519788] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:04.917 [2024-07-15 18:51:21.519879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.917 [2024-07-15 18:51:21.519897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:04.917 [2024-07-15 18:51:21.523997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:04.917 [2024-07-15 18:51:21.524095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.917 [2024-07-15 18:51:21.524112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:04.917 [2024-07-15 18:51:21.527808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:04.917 [2024-07-15 18:51:21.527896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.917 [2024-07-15 18:51:21.527913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:04.917 [2024-07-15 18:51:21.531847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:04.917 [2024-07-15 18:51:21.531945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.917 [2024-07-15 18:51:21.531962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:04.917 [2024-07-15 18:51:21.535464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:04.917 [2024-07-15 18:51:21.535575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.917 [2024-07-15 18:51:21.535597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:04.917 [2024-07-15 18:51:21.539059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:04.917 [2024-07-15 18:51:21.539190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.917 [2024-07-15 18:51:21.539209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:04.917 [2024-07-15 18:51:21.543270] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:04.917 [2024-07-15 18:51:21.543382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.917 [2024-07-15 18:51:21.543400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:04.917 [2024-07-15 18:51:21.547364] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:04.917 [2024-07-15 18:51:21.547474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.917 [2024-07-15 18:51:21.547492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:04.917 [2024-07-15 18:51:21.553145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:04.917 [2024-07-15 18:51:21.553329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.917 [2024-07-15 18:51:21.553346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:04.917 [2024-07-15 18:51:21.560950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:04.917 [2024-07-15 18:51:21.561103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.917 [2024-07-15 18:51:21.561121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:04.917 [2024-07-15 18:51:21.566486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:04.917 [2024-07-15 18:51:21.566560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.917 [2024-07-15 18:51:21.566577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:04.917 [2024-07-15 18:51:21.572262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:04.917 [2024-07-15 18:51:21.572329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.917 [2024-07-15 18:51:21.572347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:04.917 [2024-07-15 18:51:21.577336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:04.917 [2024-07-15 18:51:21.577446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.917 [2024-07-15 18:51:21.577462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:04.917 [2024-07-15 18:51:21.581750] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:04.917 [2024-07-15 18:51:21.581850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.918 [2024-07-15 18:51:21.581868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:04.918 [2024-07-15 18:51:21.586195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:04.918 [2024-07-15 18:51:21.586354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.918 [2024-07-15 18:51:21.586373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:04.918 [2024-07-15 18:51:21.590365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:04.918 [2024-07-15 18:51:21.590468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.918 [2024-07-15 18:51:21.590485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:04.918 [2024-07-15 18:51:21.594679] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:04.918 [2024-07-15 18:51:21.594961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.918 [2024-07-15 18:51:21.594979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:04.918 [2024-07-15 18:51:21.599202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:04.918 [2024-07-15 18:51:21.599329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.918 [2024-07-15 18:51:21.599347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:04.918 [2024-07-15 18:51:21.602994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:04.918 [2024-07-15 18:51:21.603091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.918 [2024-07-15 18:51:21.603108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:04.918 [2024-07-15 18:51:21.607531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:04.918 [2024-07-15 18:51:21.607681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.918 [2024-07-15 18:51:21.607699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:04.918 [2024-07-15 18:51:21.611416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:04.918 [2024-07-15 18:51:21.611500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.918 [2024-07-15 18:51:21.611517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:04.918 [2024-07-15 18:51:21.615599] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:04.918 [2024-07-15 18:51:21.615656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.918 [2024-07-15 18:51:21.615673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:04.918 [2024-07-15 18:51:21.620946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:04.918 [2024-07-15 18:51:21.621007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.918 [2024-07-15 18:51:21.621025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.178 [2024-07-15 18:51:21.625705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.178 [2024-07-15 18:51:21.625815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.178 [2024-07-15 18:51:21.625835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.178 [2024-07-15 18:51:21.629865] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.178 [2024-07-15 18:51:21.629979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.178 [2024-07-15 18:51:21.630000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.178 [2024-07-15 18:51:21.634325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.178 [2024-07-15 18:51:21.634434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.178 [2024-07-15 18:51:21.634451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.178 [2024-07-15 18:51:21.638412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.178 [2024-07-15 18:51:21.638567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.178 [2024-07-15 18:51:21.638585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.178 [2024-07-15 18:51:21.642546] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.178 [2024-07-15 18:51:21.642645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.178 [2024-07-15 18:51:21.642662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.178 [2024-07-15 18:51:21.647118] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.178 [2024-07-15 18:51:21.647193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.178 [2024-07-15 18:51:21.647210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.178 [2024-07-15 18:51:21.652241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.178 [2024-07-15 18:51:21.652304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.178 [2024-07-15 18:51:21.652322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.178 [2024-07-15 18:51:21.656414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.178 [2024-07-15 18:51:21.656498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.178 [2024-07-15 18:51:21.656515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.178 [2024-07-15 18:51:21.660115] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.178 [2024-07-15 18:51:21.660213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.179 [2024-07-15 18:51:21.660236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.179 [2024-07-15 18:51:21.668682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.179 [2024-07-15 18:51:21.668914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.179 [2024-07-15 18:51:21.668932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.179 [2024-07-15 18:51:21.675600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.179 [2024-07-15 18:51:21.675682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.179 [2024-07-15 18:51:21.675699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.179 [2024-07-15 18:51:21.681882] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.179 [2024-07-15 18:51:21.681947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.179 [2024-07-15 18:51:21.681964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.179 [2024-07-15 18:51:21.687965] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.179 [2024-07-15 18:51:21.688046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.179 [2024-07-15 18:51:21.688063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.179 [2024-07-15 18:51:21.692680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.179 [2024-07-15 18:51:21.692756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.179 [2024-07-15 18:51:21.692773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.179 [2024-07-15 18:51:21.696901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.179 [2024-07-15 18:51:21.697039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.179 [2024-07-15 18:51:21.697057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.179 [2024-07-15 18:51:21.700896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.179 [2024-07-15 18:51:21.700957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.179 [2024-07-15 18:51:21.700974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.179 [2024-07-15 18:51:21.705083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.179 [2024-07-15 18:51:21.705180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.179 [2024-07-15 18:51:21.705197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.179 [2024-07-15 18:51:21.709364] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.179 [2024-07-15 18:51:21.709476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.179 [2024-07-15 18:51:21.709493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.179 [2024-07-15 18:51:21.717575] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.179 [2024-07-15 18:51:21.717856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.179 [2024-07-15 18:51:21.717873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.179 [2024-07-15 18:51:21.725518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.179 [2024-07-15 18:51:21.725601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.179 [2024-07-15 18:51:21.725618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.179 [2024-07-15 18:51:21.731643] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.179 [2024-07-15 18:51:21.731812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.179 [2024-07-15 18:51:21.731830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.179 [2024-07-15 18:51:21.737294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.179 [2024-07-15 18:51:21.737359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.179 [2024-07-15 18:51:21.737376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.179 [2024-07-15 18:51:21.741951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.179 [2024-07-15 18:51:21.742066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.179 [2024-07-15 18:51:21.742082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.179 [2024-07-15 18:51:21.746021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.179 [2024-07-15 18:51:21.746148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.179 [2024-07-15 18:51:21.746166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.179 [2024-07-15 18:51:21.749916] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.179 [2024-07-15 18:51:21.750008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.179 [2024-07-15 18:51:21.750025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.179 [2024-07-15 18:51:21.753845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.179 [2024-07-15 18:51:21.753948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.179 [2024-07-15 18:51:21.753966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.179 [2024-07-15 18:51:21.757864] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.179 [2024-07-15 18:51:21.757972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.179 [2024-07-15 18:51:21.757992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.179 [2024-07-15 18:51:21.761799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.179 [2024-07-15 18:51:21.761881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.179 [2024-07-15 18:51:21.761899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.179 [2024-07-15 18:51:21.766216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.179 [2024-07-15 18:51:21.766312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.179 [2024-07-15 18:51:21.766330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.179 [2024-07-15 18:51:21.771466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.179 [2024-07-15 18:51:21.771547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.179 [2024-07-15 18:51:21.771565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.179 [2024-07-15 18:51:21.776047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.179 [2024-07-15 18:51:21.776185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.179 [2024-07-15 18:51:21.776204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.179 [2024-07-15 18:51:21.780369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.179 [2024-07-15 18:51:21.780456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.179 [2024-07-15 18:51:21.780473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.179 [2024-07-15 18:51:21.784878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.179 [2024-07-15 18:51:21.784955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.179 [2024-07-15 18:51:21.784972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.179 [2024-07-15 18:51:21.789171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.179 [2024-07-15 18:51:21.789300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.179 [2024-07-15 18:51:21.789319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.179 [2024-07-15 18:51:21.793542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.179 [2024-07-15 18:51:21.793657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.179 [2024-07-15 18:51:21.793675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.179 [2024-07-15 18:51:21.797811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.179 [2024-07-15 18:51:21.797909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.179 [2024-07-15 18:51:21.797926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.179 [2024-07-15 18:51:21.801825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.179 [2024-07-15 18:51:21.801920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.179 [2024-07-15 18:51:21.801938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.179 [2024-07-15 18:51:21.805690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.179 [2024-07-15 18:51:21.805786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.180 [2024-07-15 18:51:21.805805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.180 [2024-07-15 18:51:21.809507] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.180 [2024-07-15 18:51:21.809594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.180 [2024-07-15 18:51:21.809613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.180 [2024-07-15 18:51:21.813358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.180 [2024-07-15 18:51:21.813472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.180 [2024-07-15 18:51:21.813490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.180 [2024-07-15 18:51:21.817187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.180 [2024-07-15 18:51:21.817299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.180 [2024-07-15 18:51:21.817317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.180 [2024-07-15 18:51:21.821027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.180 [2024-07-15 18:51:21.821125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.180 [2024-07-15 18:51:21.821142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.180 [2024-07-15 18:51:21.824887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.180 [2024-07-15 18:51:21.824988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.180 [2024-07-15 18:51:21.825005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.180 [2024-07-15 18:51:21.828717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.180 [2024-07-15 18:51:21.828859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.180 [2024-07-15 18:51:21.828878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.180 [2024-07-15 18:51:21.832545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.180 [2024-07-15 18:51:21.832644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.180 [2024-07-15 18:51:21.832662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.180 [2024-07-15 18:51:21.836339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.180 [2024-07-15 18:51:21.836446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.180 [2024-07-15 18:51:21.836463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.180 [2024-07-15 18:51:21.840174] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.180 [2024-07-15 18:51:21.840279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.180 [2024-07-15 18:51:21.840297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.180 [2024-07-15 18:51:21.843934] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.180 [2024-07-15 18:51:21.844048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.180 [2024-07-15 18:51:21.844066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.180 [2024-07-15 18:51:21.847753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.180 [2024-07-15 18:51:21.847867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.180 [2024-07-15 18:51:21.847885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.180 [2024-07-15 18:51:21.851472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.180 [2024-07-15 18:51:21.851589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.180 [2024-07-15 18:51:21.851626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.180 [2024-07-15 18:51:21.855210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.180 [2024-07-15 18:51:21.855309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.180 [2024-07-15 18:51:21.855327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.180 [2024-07-15 18:51:21.858954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.180 [2024-07-15 18:51:21.859088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.180 [2024-07-15 18:51:21.859107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.180 [2024-07-15 18:51:21.862689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.180 [2024-07-15 18:51:21.862794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.180 [2024-07-15 18:51:21.862815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.180 [2024-07-15 18:51:21.866454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.180 [2024-07-15 18:51:21.866583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.180 [2024-07-15 18:51:21.866600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.180 [2024-07-15 18:51:21.870205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.180 [2024-07-15 18:51:21.870300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.180 [2024-07-15 18:51:21.870317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:05.180 [2024-07-15 18:51:21.873927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.180 [2024-07-15 18:51:21.874024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.180 [2024-07-15 18:51:21.874041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:05.180 [2024-07-15 18:51:21.878057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.180 [2024-07-15 18:51:21.878179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.180 [2024-07-15 18:51:21.878197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.180 [2024-07-15 18:51:21.881987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.180 [2024-07-15 18:51:21.882082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.180 [2024-07-15 18:51:21.882100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:05.441 [2024-07-15 18:51:21.885804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90
00:26:05.441 [2024-07-15 18:51:21.885920]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.441 [2024-07-15 18:51:21.885938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.441 [2024-07-15 18:51:21.889623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.441 [2024-07-15 18:51:21.889728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.441 [2024-07-15 18:51:21.889746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.441 [2024-07-15 18:51:21.893427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.441 [2024-07-15 18:51:21.893558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.441 [2024-07-15 18:51:21.893576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.441 [2024-07-15 18:51:21.897844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.441 [2024-07-15 18:51:21.897928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.441 [2024-07-15 18:51:21.897945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.441 [2024-07-15 18:51:21.901648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 
00:26:05.441 [2024-07-15 18:51:21.901738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.441 [2024-07-15 18:51:21.901756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.441 [2024-07-15 18:51:21.905448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.441 [2024-07-15 18:51:21.905563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.441 [2024-07-15 18:51:21.905582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.441 [2024-07-15 18:51:21.909188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.441 [2024-07-15 18:51:21.909283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.441 [2024-07-15 18:51:21.909301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.441 [2024-07-15 18:51:21.912928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.441 [2024-07-15 18:51:21.913028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.441 [2024-07-15 18:51:21.913044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.441 [2024-07-15 18:51:21.916668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.441 [2024-07-15 18:51:21.916778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.441 [2024-07-15 18:51:21.916796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.441 [2024-07-15 18:51:21.920435] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.441 [2024-07-15 18:51:21.920541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.441 [2024-07-15 18:51:21.920558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.441 [2024-07-15 18:51:21.924161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.441 [2024-07-15 18:51:21.924253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.441 [2024-07-15 18:51:21.924270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.441 [2024-07-15 18:51:21.927909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.441 [2024-07-15 18:51:21.928023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.441 [2024-07-15 18:51:21.928044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.441 [2024-07-15 18:51:21.931593] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.441 [2024-07-15 18:51:21.931725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.441 [2024-07-15 18:51:21.931743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.441 [2024-07-15 18:51:21.935351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.441 [2024-07-15 18:51:21.935444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.441 [2024-07-15 18:51:21.935461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.441 [2024-07-15 18:51:21.939067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.441 [2024-07-15 18:51:21.939169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.441 [2024-07-15 18:51:21.939186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.441 [2024-07-15 18:51:21.942788] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.441 [2024-07-15 18:51:21.942884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.441 [2024-07-15 18:51:21.942901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:26:05.441 [2024-07-15 18:51:21.946513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.441 [2024-07-15 18:51:21.946623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.441 [2024-07-15 18:51:21.946645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.441 [2024-07-15 18:51:21.950196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.441 [2024-07-15 18:51:21.950306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.441 [2024-07-15 18:51:21.950323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.441 [2024-07-15 18:51:21.953913] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.441 [2024-07-15 18:51:21.954027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.441 [2024-07-15 18:51:21.954045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.441 [2024-07-15 18:51:21.957633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.441 [2024-07-15 18:51:21.957751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.442 [2024-07-15 18:51:21.957769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.442 [2024-07-15 18:51:21.961740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.442 [2024-07-15 18:51:21.961846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.442 [2024-07-15 18:51:21.961865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.442 [2024-07-15 18:51:21.966968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.442 [2024-07-15 18:51:21.967323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.442 [2024-07-15 18:51:21.967341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.442 [2024-07-15 18:51:21.971849] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.442 [2024-07-15 18:51:21.971931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.442 [2024-07-15 18:51:21.971949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.442 [2024-07-15 18:51:21.976003] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.442 [2024-07-15 18:51:21.976099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.442 [2024-07-15 18:51:21.976116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.442 [2024-07-15 18:51:21.980195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.442 [2024-07-15 18:51:21.980289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.442 [2024-07-15 18:51:21.980306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.442 [2024-07-15 18:51:21.984589] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.442 [2024-07-15 18:51:21.984742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.442 [2024-07-15 18:51:21.984760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.442 [2024-07-15 18:51:21.988794] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.442 [2024-07-15 18:51:21.988894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.442 [2024-07-15 18:51:21.988911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.442 [2024-07-15 18:51:21.993290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.442 [2024-07-15 18:51:21.993441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:05.442 [2024-07-15 18:51:21.993459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.442 [2024-07-15 18:51:21.997684] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.442 [2024-07-15 18:51:21.997785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.442 [2024-07-15 18:51:21.997802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.442 [2024-07-15 18:51:22.001536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.442 [2024-07-15 18:51:22.001639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.442 [2024-07-15 18:51:22.001656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.442 [2024-07-15 18:51:22.005319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.442 [2024-07-15 18:51:22.005412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.442 [2024-07-15 18:51:22.005430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.442 [2024-07-15 18:51:22.009091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.442 [2024-07-15 18:51:22.009217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.442 [2024-07-15 18:51:22.009242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.442 [2024-07-15 18:51:22.013011] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.442 [2024-07-15 18:51:22.013112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.442 [2024-07-15 18:51:22.013129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.442 [2024-07-15 18:51:22.017617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.442 [2024-07-15 18:51:22.017699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.442 [2024-07-15 18:51:22.017716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.442 [2024-07-15 18:51:22.022500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.442 [2024-07-15 18:51:22.022595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.442 [2024-07-15 18:51:22.022614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.442 [2024-07-15 18:51:22.026680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.442 [2024-07-15 18:51:22.026798] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.442 [2024-07-15 18:51:22.026819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.442 [2024-07-15 18:51:22.031252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.442 [2024-07-15 18:51:22.031349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.442 [2024-07-15 18:51:22.031366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.442 [2024-07-15 18:51:22.035657] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.442 [2024-07-15 18:51:22.035745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.442 [2024-07-15 18:51:22.035763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.442 [2024-07-15 18:51:22.039887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.442 [2024-07-15 18:51:22.040011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.442 [2024-07-15 18:51:22.040029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.442 [2024-07-15 18:51:22.044162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 
00:26:05.442 [2024-07-15 18:51:22.044264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.442 [2024-07-15 18:51:22.044281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.442 [2024-07-15 18:51:22.048446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.442 [2024-07-15 18:51:22.048541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.442 [2024-07-15 18:51:22.048559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.442 [2024-07-15 18:51:22.052710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.442 [2024-07-15 18:51:22.052778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.442 [2024-07-15 18:51:22.052795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.442 [2024-07-15 18:51:22.056874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.442 [2024-07-15 18:51:22.056972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.442 [2024-07-15 18:51:22.056989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.442 [2024-07-15 18:51:22.061013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.442 [2024-07-15 18:51:22.061115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.442 [2024-07-15 18:51:22.061132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.442 [2024-07-15 18:51:22.065289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.442 [2024-07-15 18:51:22.065377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.442 [2024-07-15 18:51:22.065394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.442 [2024-07-15 18:51:22.069711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.442 [2024-07-15 18:51:22.069773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.442 [2024-07-15 18:51:22.069791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.442 [2024-07-15 18:51:22.074165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.442 [2024-07-15 18:51:22.074310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.442 [2024-07-15 18:51:22.074334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.442 [2024-07-15 18:51:22.078400] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.442 [2024-07-15 18:51:22.078471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.442 [2024-07-15 18:51:22.078489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.443 [2024-07-15 18:51:22.082609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.443 [2024-07-15 18:51:22.082714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.443 [2024-07-15 18:51:22.082732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.443 [2024-07-15 18:51:22.086950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.443 [2024-07-15 18:51:22.087110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.443 [2024-07-15 18:51:22.087128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.443 [2024-07-15 18:51:22.091250] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.443 [2024-07-15 18:51:22.091370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.443 [2024-07-15 18:51:22.091388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:26:05.443 [2024-07-15 18:51:22.095519] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.443 [2024-07-15 18:51:22.095602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.443 [2024-07-15 18:51:22.095619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.443 [2024-07-15 18:51:22.099769] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.443 [2024-07-15 18:51:22.099878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.443 [2024-07-15 18:51:22.099895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.443 [2024-07-15 18:51:22.104243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.443 [2024-07-15 18:51:22.104337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.443 [2024-07-15 18:51:22.104355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.443 [2024-07-15 18:51:22.109002] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.443 [2024-07-15 18:51:22.109117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.443 [2024-07-15 18:51:22.109134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.443 [2024-07-15 18:51:22.114711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.443 [2024-07-15 18:51:22.114878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.443 [2024-07-15 18:51:22.114897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.443 [2024-07-15 18:51:22.121734] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.443 [2024-07-15 18:51:22.121935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.443 [2024-07-15 18:51:22.121953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.443 [2024-07-15 18:51:22.129056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.443 [2024-07-15 18:51:22.129221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.443 [2024-07-15 18:51:22.129245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.443 [2024-07-15 18:51:22.136120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.443 [2024-07-15 18:51:22.136293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.443 [2024-07-15 18:51:22.136311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.443 [2024-07-15 18:51:22.143316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.443 [2024-07-15 18:51:22.143483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.443 [2024-07-15 18:51:22.143502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.704 [2024-07-15 18:51:22.150630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.704 [2024-07-15 18:51:22.150819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.704 [2024-07-15 18:51:22.150838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.704 [2024-07-15 18:51:22.158433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.704 [2024-07-15 18:51:22.158584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.704 [2024-07-15 18:51:22.158610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.704 [2024-07-15 18:51:22.165638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.704 [2024-07-15 18:51:22.165802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:05.704 [2024-07-15 18:51:22.165820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.704 [2024-07-15 18:51:22.172930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.704 [2024-07-15 18:51:22.173061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.704 [2024-07-15 18:51:22.173079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.704 [2024-07-15 18:51:22.180699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.704 [2024-07-15 18:51:22.180800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.704 [2024-07-15 18:51:22.180818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.704 [2024-07-15 18:51:22.188394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.704 [2024-07-15 18:51:22.188623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.704 [2024-07-15 18:51:22.188641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.704 [2024-07-15 18:51:22.196259] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.704 [2024-07-15 18:51:22.196454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.704 [2024-07-15 18:51:22.196474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.704 [2024-07-15 18:51:22.203872] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.704 [2024-07-15 18:51:22.204033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.704 [2024-07-15 18:51:22.204051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.704 [2024-07-15 18:51:22.211173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.704 [2024-07-15 18:51:22.211312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.704 [2024-07-15 18:51:22.211332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.704 [2024-07-15 18:51:22.218650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.704 [2024-07-15 18:51:22.218835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.704 [2024-07-15 18:51:22.218854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.704 [2024-07-15 18:51:22.228237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.704 [2024-07-15 18:51:22.228512] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.704 [2024-07-15 18:51:22.228530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.704 [2024-07-15 18:51:22.235371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.704 [2024-07-15 18:51:22.235523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.704 [2024-07-15 18:51:22.235541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.704 [2024-07-15 18:51:22.242001] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.704 [2024-07-15 18:51:22.242099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.704 [2024-07-15 18:51:22.242120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.704 [2024-07-15 18:51:22.247663] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.704 [2024-07-15 18:51:22.247769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.704 [2024-07-15 18:51:22.247786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.704 [2024-07-15 18:51:22.252353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 
00:26:05.704 [2024-07-15 18:51:22.252437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.704 [2024-07-15 18:51:22.252454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.704 [2024-07-15 18:51:22.256382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.704 [2024-07-15 18:51:22.256453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.704 [2024-07-15 18:51:22.256470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.704 [2024-07-15 18:51:22.260366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.704 [2024-07-15 18:51:22.260480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.704 [2024-07-15 18:51:22.260497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.704 [2024-07-15 18:51:22.264253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.704 [2024-07-15 18:51:22.264323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.704 [2024-07-15 18:51:22.264340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.704 [2024-07-15 18:51:22.268085] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.704 [2024-07-15 18:51:22.268171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.704 [2024-07-15 18:51:22.268189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.704 [2024-07-15 18:51:22.272110] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.704 [2024-07-15 18:51:22.272210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.704 [2024-07-15 18:51:22.272232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.704 [2024-07-15 18:51:22.276896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.704 [2024-07-15 18:51:22.276995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.704 [2024-07-15 18:51:22.277012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.704 [2024-07-15 18:51:22.284296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.705 [2024-07-15 18:51:22.284582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.705 [2024-07-15 18:51:22.284601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.705 [2024-07-15 18:51:22.293200] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.705 [2024-07-15 18:51:22.293280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.705 [2024-07-15 18:51:22.293297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.705 [2024-07-15 18:51:22.299279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.705 [2024-07-15 18:51:22.299394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.705 [2024-07-15 18:51:22.299412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.705 [2024-07-15 18:51:22.305534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.705 [2024-07-15 18:51:22.305673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.705 [2024-07-15 18:51:22.305692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.705 [2024-07-15 18:51:22.310631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.705 [2024-07-15 18:51:22.310699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.705 [2024-07-15 18:51:22.310715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:26:05.705 [2024-07-15 18:51:22.315890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.705 [2024-07-15 18:51:22.315987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.705 [2024-07-15 18:51:22.316004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.705 [2024-07-15 18:51:22.320949] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.705 [2024-07-15 18:51:22.321064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.705 [2024-07-15 18:51:22.321083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.705 [2024-07-15 18:51:22.326755] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.705 [2024-07-15 18:51:22.326849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.705 [2024-07-15 18:51:22.326866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.705 [2024-07-15 18:51:22.332386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.705 [2024-07-15 18:51:22.332496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.705 [2024-07-15 18:51:22.332513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.705 [2024-07-15 18:51:22.338154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.705 [2024-07-15 18:51:22.338299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.705 [2024-07-15 18:51:22.338318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.705 [2024-07-15 18:51:22.343763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.705 [2024-07-15 18:51:22.343840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.705 [2024-07-15 18:51:22.343858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.705 [2024-07-15 18:51:22.349086] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.705 [2024-07-15 18:51:22.349170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.705 [2024-07-15 18:51:22.349187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.705 [2024-07-15 18:51:22.354214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.705 [2024-07-15 18:51:22.354296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.705 [2024-07-15 18:51:22.354313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.705 [2024-07-15 18:51:22.359532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.705 [2024-07-15 18:51:22.359608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.705 [2024-07-15 18:51:22.359625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.705 [2024-07-15 18:51:22.364560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.705 [2024-07-15 18:51:22.364703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.705 [2024-07-15 18:51:22.364721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.705 [2024-07-15 18:51:22.369555] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.705 [2024-07-15 18:51:22.369801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.705 [2024-07-15 18:51:22.369819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.705 [2024-07-15 18:51:22.374602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.705 [2024-07-15 18:51:22.374682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:05.705 [2024-07-15 18:51:22.374701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.705 [2024-07-15 18:51:22.379315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.705 [2024-07-15 18:51:22.379386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.705 [2024-07-15 18:51:22.379407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.705 [2024-07-15 18:51:22.383303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.705 [2024-07-15 18:51:22.383457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.705 [2024-07-15 18:51:22.383476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.705 [2024-07-15 18:51:22.387161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.705 [2024-07-15 18:51:22.387319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.705 [2024-07-15 18:51:22.387337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.705 [2024-07-15 18:51:22.390991] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.705 [2024-07-15 18:51:22.391104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.705 [2024-07-15 18:51:22.391124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.705 [2024-07-15 18:51:22.394839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.705 [2024-07-15 18:51:22.394976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.705 [2024-07-15 18:51:22.394995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.705 [2024-07-15 18:51:22.398675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.705 [2024-07-15 18:51:22.398770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.705 [2024-07-15 18:51:22.398789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.705 [2024-07-15 18:51:22.402477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.705 [2024-07-15 18:51:22.402581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.705 [2024-07-15 18:51:22.402600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.705 [2024-07-15 18:51:22.406290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.705 [2024-07-15 18:51:22.406386] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.705 [2024-07-15 18:51:22.406405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 18:51:22.410041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.966 [2024-07-15 18:51:22.410130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.966 [2024-07-15 18:51:22.410147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 18:51:22.413808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.966 [2024-07-15 18:51:22.413913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.966 [2024-07-15 18:51:22.413931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 18:51:22.417581] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.966 [2024-07-15 18:51:22.417684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.966 [2024-07-15 18:51:22.417701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 18:51:22.421628] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 
00:26:05.966 [2024-07-15 18:51:22.421736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.966 [2024-07-15 18:51:22.421753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 18:51:22.426247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.966 [2024-07-15 18:51:22.426332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.966 [2024-07-15 18:51:22.426349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 18:51:22.431220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.966 [2024-07-15 18:51:22.431349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.966 [2024-07-15 18:51:22.431367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 18:51:22.435783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.966 [2024-07-15 18:51:22.435876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.966 [2024-07-15 18:51:22.435893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 18:51:22.440177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.966 [2024-07-15 18:51:22.440275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.966 [2024-07-15 18:51:22.440292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 18:51:22.444168] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.966 [2024-07-15 18:51:22.444267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.966 [2024-07-15 18:51:22.444284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 18:51:22.448052] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.966 [2024-07-15 18:51:22.448150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.966 [2024-07-15 18:51:22.448168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 18:51:22.451868] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.966 [2024-07-15 18:51:22.451978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.966 [2024-07-15 18:51:22.451995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 18:51:22.455693] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.966 [2024-07-15 18:51:22.455770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.966 [2024-07-15 18:51:22.455787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 18:51:22.459523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.966 [2024-07-15 18:51:22.459659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.966 [2024-07-15 18:51:22.459676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 18:51:22.463732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.966 [2024-07-15 18:51:22.463857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.966 [2024-07-15 18:51:22.463875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 18:51:22.467599] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.966 [2024-07-15 18:51:22.467753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.966 [2024-07-15 18:51:22.467771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:26:05.966 [2024-07-15 18:51:22.471386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.966 [2024-07-15 18:51:22.471474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.966 [2024-07-15 18:51:22.471491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 18:51:22.475145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.966 [2024-07-15 18:51:22.475252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.966 [2024-07-15 18:51:22.475269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 18:51:22.478914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.966 [2024-07-15 18:51:22.479022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.966 [2024-07-15 18:51:22.479039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 18:51:22.482661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.966 [2024-07-15 18:51:22.482759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.966 [2024-07-15 18:51:22.482779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 18:51:22.486511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.966 [2024-07-15 18:51:22.486628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.966 [2024-07-15 18:51:22.486649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 18:51:22.491317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.966 [2024-07-15 18:51:22.491432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.966 [2024-07-15 18:51:22.491451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 18:51:22.496362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.966 [2024-07-15 18:51:22.496466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.966 [2024-07-15 18:51:22.496483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 18:51:22.500938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.966 [2024-07-15 18:51:22.501060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.966 [2024-07-15 18:51:22.501077] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 18:51:22.505403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.966 [2024-07-15 18:51:22.505535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.966 [2024-07-15 18:51:22.505553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 18:51:22.509819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.966 [2024-07-15 18:51:22.509934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.966 [2024-07-15 18:51:22.509952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 18:51:22.514544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.966 [2024-07-15 18:51:22.514647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.966 [2024-07-15 18:51:22.514664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 18:51:22.518708] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.966 [2024-07-15 18:51:22.518847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:05.966 [2024-07-15 18:51:22.518865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.967 [2024-07-15 18:51:22.522987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.967 [2024-07-15 18:51:22.523049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.967 [2024-07-15 18:51:22.523067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.967 [2024-07-15 18:51:22.526900] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.967 [2024-07-15 18:51:22.526983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.967 [2024-07-15 18:51:22.527000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.967 [2024-07-15 18:51:22.530722] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.967 [2024-07-15 18:51:22.530803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.967 [2024-07-15 18:51:22.530821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.967 [2024-07-15 18:51:22.534535] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.967 [2024-07-15 18:51:22.534630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.967 [2024-07-15 18:51:22.534647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.967 [2024-07-15 18:51:22.538320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.967 [2024-07-15 18:51:22.538401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.967 [2024-07-15 18:51:22.538417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.967 [2024-07-15 18:51:22.542329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.967 [2024-07-15 18:51:22.542425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.967 [2024-07-15 18:51:22.542442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.967 [2024-07-15 18:51:22.546661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.967 [2024-07-15 18:51:22.546784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.967 [2024-07-15 18:51:22.546803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.967 [2024-07-15 18:51:22.550534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x99f810) with pdu=0x2000190fef90 00:26:05.967 [2024-07-15 18:51:22.550645] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.967 [2024-07-15 18:51:22.550663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.967 00:26:05.967 Latency(us) 00:26:05.967 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:05.967 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:05.967 nvme0n1 : 2.00 6233.35 779.17 0.00 0.00 2563.06 1738.13 10656.72 00:26:05.967 =================================================================================================================== 00:26:05.967 Total : 6233.35 779.17 0.00 0.00 2563.06 1738.13 10656.72 00:26:05.967 0 00:26:05.967 18:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:05.967 18:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:05.967 18:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:05.967 | .driver_specific 00:26:05.967 | .nvme_error 00:26:05.967 | .status_code 00:26:05.967 | .command_transient_transport_error' 00:26:05.967 18:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:06.226 18:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 402 > 0 )) 00:26:06.226 18:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1237889 00:26:06.226 18:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1237889 ']' 00:26:06.226 18:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1237889 00:26:06.226 18:51:22 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:06.226 18:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:06.226 18:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1237889 00:26:06.226 18:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:06.226 18:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:06.226 18:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1237889' 00:26:06.226 killing process with pid 1237889 00:26:06.226 18:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1237889 00:26:06.226 Received shutdown signal, test time was about 2.000000 seconds 00:26:06.226 00:26:06.226 Latency(us) 00:26:06.226 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.226 =================================================================================================================== 00:26:06.226 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:06.226 18:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1237889 00:26:06.485 18:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1235400 00:26:06.485 18:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1235400 ']' 00:26:06.485 18:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1235400 00:26:06.485 18:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:06.485 18:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:06.485 18:51:22 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1235400 00:26:06.485 18:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:06.485 18:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:06.485 18:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1235400' 00:26:06.485 killing process with pid 1235400 00:26:06.485 18:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1235400 00:26:06.485 18:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1235400 00:26:06.744 00:26:06.744 real 0m16.946s 00:26:06.744 user 0m32.548s 00:26:06.744 sys 0m4.407s 00:26:06.744 18:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:06.744 18:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:06.744 ************************************ 00:26:06.744 END TEST nvmf_digest_error 00:26:06.744 ************************************ 00:26:06.744 18:51:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:26:06.744 18:51:23 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:06.744 18:51:23 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:06.744 18:51:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:06.744 18:51:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:26:06.744 18:51:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:06.744 18:51:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:26:06.744 18:51:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:06.744 18:51:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:26:06.744 rmmod nvme_tcp 00:26:06.744 rmmod nvme_fabrics 00:26:06.744 rmmod nvme_keyring 00:26:06.744 18:51:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:06.744 18:51:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:26:06.744 18:51:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:26:06.744 18:51:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1235400 ']' 00:26:06.744 18:51:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1235400 00:26:06.744 18:51:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1235400 ']' 00:26:06.744 18:51:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1235400 00:26:06.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1235400) - No such process 00:26:06.744 18:51:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1235400 is not found' 00:26:06.744 Process with pid 1235400 is not found 00:26:06.744 18:51:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:06.744 18:51:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:06.744 18:51:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:06.744 18:51:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:06.744 18:51:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:06.744 18:51:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.744 18:51:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:06.744 18:51:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.277 18:51:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:09.277 00:26:09.277 real 0m41.509s 00:26:09.277 user 1m6.518s 
00:26:09.277 sys 0m12.938s 00:26:09.277 18:51:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:09.277 18:51:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:09.277 ************************************ 00:26:09.277 END TEST nvmf_digest 00:26:09.277 ************************************ 00:26:09.277 18:51:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:09.277 18:51:25 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:26:09.277 18:51:25 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:26:09.277 18:51:25 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:26:09.277 18:51:25 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:09.277 18:51:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:09.277 18:51:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:09.277 18:51:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:09.277 ************************************ 00:26:09.277 START TEST nvmf_bdevperf 00:26:09.277 ************************************ 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:09.277 * Looking for test storage... 
00:26:09.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:09.277 18:51:25 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:09.277 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:09.278 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:09.278 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:09.278 18:51:25 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:09.278 18:51:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:09.278 18:51:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:09.278 18:51:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:09.278 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:09.278 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:09.278 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:09.278 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:09.278 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:09.278 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.278 18:51:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:09.278 18:51:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.278 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:09.278 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:09.278 18:51:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:26:09.278 18:51:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:14.547 18:51:30 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:14.547 
18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:14.547 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:14.547 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]]
00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:26:14.547 Found net devices under 0000:86:00.0: cvl_0_0
00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]]
00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:14.547 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:26:14.548 Found net devices under 0000:86:00.1: cvl_0_1
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:26:14.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:14.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms
00:26:14.548
00:26:14.548 --- 10.0.0.2 ping statistics ---
00:26:14.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:14.548 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:14.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:14.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms
00:26:14.548
00:26:14.548 --- 10.0.0.1 ping statistics ---
00:26:14.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:14.548 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1242005
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1242005
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1242005 ']'
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:14.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:26:14.548 18:51:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:14.548 [2024-07-15 18:51:31.028254] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization...
00:26:14.548 [2024-07-15 18:51:31.028297] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:14.548 EAL: No free 2048 kB hugepages reported on node 1
00:26:14.548 [2024-07-15 18:51:31.084528] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:26:14.548 [2024-07-15 18:51:31.165336] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:14.548 [2024-07-15 18:51:31.165371] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:14.548 [2024-07-15 18:51:31.165377] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:14.548 [2024-07-15 18:51:31.165383] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:14.548 [2024-07-15 18:51:31.165388] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:14.548 [2024-07-15 18:51:31.165498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:26:14.548 [2024-07-15 18:51:31.165515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:26:14.548 [2024-07-15 18:51:31.165516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:15.484 [2024-07-15 18:51:31.878238] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:15.484 Malloc0
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:15.484 [2024-07-15 18:51:31.940681] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:26:15.484 {
00:26:15.484 "params": {
00:26:15.484 "name": "Nvme$subsystem",
00:26:15.484 "trtype": "$TEST_TRANSPORT",
00:26:15.484 "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:15.484 "adrfam": "ipv4",
00:26:15.484 "trsvcid": "$NVMF_PORT",
00:26:15.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:15.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:15.484 "hdgst": ${hdgst:-false},
00:26:15.484 "ddgst": ${ddgst:-false}
00:26:15.484 },
00:26:15.484 "method": "bdev_nvme_attach_controller"
00:26:15.484 }
00:26:15.484 EOF
00:26:15.484 )")
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:26:15.484 18:51:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:26:15.484 "params": {
00:26:15.484 "name": "Nvme1",
00:26:15.484 "trtype": "tcp",
00:26:15.484 "traddr": "10.0.0.2",
00:26:15.484 "adrfam": "ipv4",
00:26:15.484 "trsvcid": "4420",
00:26:15.484 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:26:15.484 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:26:15.484 "hdgst": false,
00:26:15.484 "ddgst": false
00:26:15.484 },
00:26:15.485 "method": "bdev_nvme_attach_controller"
00:26:15.485 }'
00:26:15.485 [2024-07-15 18:51:31.990819] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization...
00:26:15.485 [2024-07-15 18:51:31.990870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1242115 ]
00:26:15.485 EAL: No free 2048 kB hugepages reported on node 1
00:26:15.485 [2024-07-15 18:51:32.045388] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:15.485 [2024-07-15 18:51:32.118974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:26:15.743 Running I/O for 1 seconds...
00:26:16.681
00:26:16.681 Latency(us)
00:26:16.681 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:16.681 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:16.681 Verification LBA range: start 0x0 length 0x4000
00:26:16.681 Nvme1n1 : 1.01 10855.30 42.40 0.00 0.00 11749.39 2208.28 15044.79
00:26:16.681 ===================================================================================================================
00:26:16.681 Total : 10855.30 42.40 0.00 0.00 11749.39 2208.28 15044.79
00:26:16.939 18:51:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1242412
00:26:16.939 18:51:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:26:16.939 18:51:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:26:16.939 18:51:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:26:16.939 18:51:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:26:16.939 18:51:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:26:16.939 18:51:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:26:16.939 18:51:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:26:16.939 {
00:26:16.939 "params": {
00:26:16.939 "name": "Nvme$subsystem",
00:26:16.939 "trtype": "$TEST_TRANSPORT",
00:26:16.939 "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:16.939 "adrfam": "ipv4",
00:26:16.939 "trsvcid": "$NVMF_PORT",
00:26:16.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:16.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:16.939 "hdgst": ${hdgst:-false},
00:26:16.939 "ddgst": ${ddgst:-false}
00:26:16.939 },
00:26:16.939 "method": "bdev_nvme_attach_controller"
00:26:16.939 }
00:26:16.939 EOF
00:26:16.939 )")
00:26:16.939 18:51:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:26:16.939 18:51:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:26:16.939 18:51:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:26:16.939 18:51:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:26:16.939 "params": {
00:26:16.939 "name": "Nvme1",
00:26:16.939 "trtype": "tcp",
00:26:16.939 "traddr": "10.0.0.2",
00:26:16.939 "adrfam": "ipv4",
00:26:16.939 "trsvcid": "4420",
00:26:16.939 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:26:16.939 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:26:16.939 "hdgst": false,
00:26:16.939 "ddgst": false
00:26:16.939 },
00:26:16.939 "method": "bdev_nvme_attach_controller"
00:26:16.939 }'
00:26:16.939 [2024-07-15 18:51:33.549264] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization...
00:26:16.939 [2024-07-15 18:51:33.549316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1242412 ]
00:26:16.939 EAL: No free 2048 kB hugepages reported on node 1
00:26:16.939 [2024-07-15 18:51:33.601932] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:17.197 [2024-07-15 18:51:33.675198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:26:17.197 Running I/O for 15 seconds...
00:26:20.486 18:51:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1242005
00:26:20.486 18:51:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:26:20.486 [2024-07-15 18:51:36.519046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:105888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.486 [2024-07-15 18:51:36.519083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.486 [2024-07-15 18:51:36.519099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:105896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.486 [2024-07-15 18:51:36.519109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.486 [2024-07-15 18:51:36.519121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:105904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.486 [2024-07-15 18:51:36.519128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.486 [2024-07-15 18:51:36.519137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.486 [2024-07-15 18:51:36.519145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.486 [2024-07-15 18:51:36.519153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:105920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.486 [2024-07-15 18:51:36.519160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.486 [2024-07-15 18:51:36.519168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.486 [2024-07-15 18:51:36.519180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.486 [2024-07-15 18:51:36.519190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:105936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.486 [2024-07-15 18:51:36.519197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.486 [2024-07-15 18:51:36.519206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:106672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.486 [2024-07-15 18:51:36.519214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.486 [2024-07-15 18:51:36.519228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:106680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.486 [2024-07-15 18:51:36.519235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.486 [2024-07-15 18:51:36.519244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:106688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.486 [2024-07-15 18:51:36.519251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.486 [2024-07-15 18:51:36.519259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:106696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.486 [2024-07-15 18:51:36.519266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.486 [2024-07-15 18:51:36.519276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:106704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.487 [2024-07-15 18:51:36.519283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:106712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.487 [2024-07-15 18:51:36.519300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:106720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.487 [2024-07-15 18:51:36.519320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:106728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.487 [2024-07-15 18:51:36.519337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:106736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.487 [2024-07-15 18:51:36.519353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:106744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.487 [2024-07-15 18:51:36.519371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:106752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.487 [2024-07-15 18:51:36.519391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:106760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.487 [2024-07-15 18:51:36.519411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:106768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.487 [2024-07-15 18:51:36.519427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:106776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.487 [2024-07-15 18:51:36.519442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:106784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.487 [2024-07-15 18:51:36.519458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:106792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.487 [2024-07-15 18:51:36.519473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:106800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.487 [2024-07-15 18:51:36.519488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:106808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.487 [2024-07-15 18:51:36.519503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:106816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.487 [2024-07-15 18:51:36.519517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:106824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.487 [2024-07-15 18:51:36.519532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:106832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.487 [2024-07-15 18:51:36.519546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:106840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.487 [2024-07-15 18:51:36.519562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:106848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.487 [2024-07-15 18:51:36.519576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.487 [2024-07-15 18:51:36.519591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:105952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.487 [2024-07-15 18:51:36.519607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:105960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.487 [2024-07-15 18:51:36.519621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:105968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.487 [2024-07-15 18:51:36.519636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:105976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.487 [2024-07-15 18:51:36.519651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:105984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.487 [2024-07-15 18:51:36.519665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:105992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.487 [2024-07-15 18:51:36.519680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:106000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.487 [2024-07-15 18:51:36.519694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:106008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.487 [2024-07-15 18:51:36.519708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.487 [2024-07-15 18:51:36.519722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:106024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.487 [2024-07-15 18:51:36.519737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:106032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.487 [2024-07-15 18:51:36.519751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:106040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.487 [2024-07-15 18:51:36.519766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:106048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.487 [2024-07-15 18:51:36.519781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:106056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.487 [2024-07-15 18:51:36.519795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:106064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.487 [2024-07-15 18:51:36.519810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:106072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.487 [2024-07-15 18:51:36.519824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:106080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.487 [2024-07-15 18:51:36.519838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.487 [2024-07-15 18:51:36.519853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:106096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.487 [2024-07-15 18:51:36.519868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:106856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.487 [2024-07-15 18:51:36.519882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:106864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.487 [2024-07-15 18:51:36.519896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:106872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.487 [2024-07-15 18:51:36.519911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:106880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.487 [2024-07-15 18:51:36.519925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:106888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.487 [2024-07-15 18:51:36.519939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:106896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:20.487 [2024-07-15 18:51:36.519953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.487 [2024-07-15 18:51:36.519963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:106104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.488 [2024-07-15 18:51:36.519969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.488 [2024-07-15 18:51:36.519977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:106112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.488 [2024-07-15 18:51:36.519984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.488 [2024-07-15 18:51:36.519992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:106120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.488 [2024-07-15 18:51:36.519998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.488 [2024-07-15 18:51:36.520006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.488 [2024-07-15 18:51:36.520012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.488 [2024-07-15 18:51:36.520020] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:106136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:106144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:106152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:106160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:106168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:106176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:106184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:106200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:106208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:106216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:106904 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:20.488 [2024-07-15 18:51:36.520188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:106224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:106232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:106240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:106248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:106256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520377] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:106264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:106272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:106280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:106288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:106304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:106312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:106320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:106336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:106344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:106352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:20.488 [2024-07-15 18:51:36.520545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:106360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:106368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:106376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:106384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:106392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520629] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:106408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:106416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:106424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:106432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.488 [2024-07-15 18:51:36.520702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:106440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.488 [2024-07-15 18:51:36.520709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.489 [2024-07-15 18:51:36.520717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:106448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.489 [2024-07-15 18:51:36.520723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.489 [2024-07-15 18:51:36.520731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:106456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.489 [2024-07-15 18:51:36.520738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.489 [2024-07-15 18:51:36.520746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:106464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.489 [2024-07-15 18:51:36.520752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.489 [2024-07-15 18:51:36.520760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:106472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.489 [2024-07-15 18:51:36.520766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.489 [2024-07-15 18:51:36.520775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:106480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.489 [2024-07-15 18:51:36.520782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.489 [2024-07-15 18:51:36.520790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:106488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:20.489 [2024-07-15 18:51:36.520796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.489 [2024-07-15 18:51:36.520807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:106496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.489 [2024-07-15 18:51:36.520814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.489 [2024-07-15 18:51:36.520822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:106504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.489 [2024-07-15 18:51:36.520829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.489 [2024-07-15 18:51:36.520837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:106512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.489 [2024-07-15 18:51:36.520843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.489 [2024-07-15 18:51:36.520851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:106520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.489 [2024-07-15 18:51:36.520857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.489 [2024-07-15 18:51:36.520865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:106528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.489 [2024-07-15 18:51:36.520872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.489 [2024-07-15 18:51:36.520880] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:106536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.489 [2024-07-15 18:51:36.520886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.489 [2024-07-15 18:51:36.520894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:106544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.489 [2024-07-15 18:51:36.520900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.489 [2024-07-15 18:51:36.520910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:106552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.489 [2024-07-15 18:51:36.520916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.489 [2024-07-15 18:51:36.520924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:106560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.489 [2024-07-15 18:51:36.520931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.489 [2024-07-15 18:51:36.520939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:106568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.489 [2024-07-15 18:51:36.520946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.489 [2024-07-15 18:51:36.520954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:106576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.489 [2024-07-15 18:51:36.520960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.489 [2024-07-15 18:51:36.520969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:106584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.489 [2024-07-15 18:51:36.520976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.489 [2024-07-15 18:51:36.520985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:106592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.489 [2024-07-15 18:51:36.520991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.489 [2024-07-15 18:51:36.521001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:106600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.489 [2024-07-15 18:51:36.521008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.489 [2024-07-15 18:51:36.521016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:106608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.489 [2024-07-15 18:51:36.521023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.489 [2024-07-15 18:51:36.521031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:106616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.489 [2024-07-15 18:51:36.521038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.489 [2024-07-15 18:51:36.521046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:106624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:20.489 [2024-07-15 18:51:36.521053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.489 [2024-07-15 18:51:36.521061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:106632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.489 [2024-07-15 18:51:36.521067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.489 [2024-07-15 18:51:36.521075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:106640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.489 [2024-07-15 18:51:36.521081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.489 [2024-07-15 18:51:36.521090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:106648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.489 [2024-07-15 18:51:36.521096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.489 [2024-07-15 18:51:36.521104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:106656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.489 [2024-07-15 18:51:36.521110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.489 [2024-07-15 18:51:36.521118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169bc70 is same with the state(5) to be set 00:26:20.489 [2024-07-15 18:51:36.521125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:20.489 [2024-07-15 18:51:36.521131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually:
00:26:20.489 [2024-07-15 18:51:36.521137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106664 len:8 PRP1 0x0 PRP2 0x0
00:26:20.489 [2024-07-15 18:51:36.521146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.489 [2024-07-15 18:51:36.521188] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x169bc70 was disconnected and freed. reset controller.
00:26:20.489 [2024-07-15 18:51:36.524038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.489 [2024-07-15 18:51:36.524087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:20.489 [2024-07-15 18:51:36.524594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.489 [2024-07-15 18:51:36.524609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:20.489 [2024-07-15 18:51:36.524620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:20.489 [2024-07-15 18:51:36.524798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:20.489 [2024-07-15 18:51:36.524974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.489 [2024-07-15 18:51:36.524982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.489 [2024-07-15 18:51:36.524989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.489 [2024-07-15 18:51:36.527823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
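(Editor's note, not part of the test output.) Every aborted command above completes with status "(00/08)", which is the NVMe status-code-type/status-code pair that SPDK's `spdk_nvme_print_completion` renders as "ABORTED - SQ DELETION". A minimal standalone decoder for that `(sct/sc)` notation — illustrative only, not SPDK source — looks like this:

```python
# Illustrative decoder for the "(SCT/SC)" pair printed by SPDK's
# spdk_nvme_print_completion, e.g. "ABORTED - SQ DELETION (00/08)".
# Standalone sketch; the tables below are from the NVMe base spec,
# trimmed to the codes relevant to this log.

STATUS_CODE_TYPES = {
    0x0: "Generic Command Status",
    0x1: "Command Specific Status",
    0x2: "Media and Data Integrity Errors",
    0x3: "Path Related Status",
}

GENERIC_STATUS_CODES = {
    0x00: "Successful Completion",
    0x04: "Data Transfer Error",
    0x07: "Command Abort Requested",
    0x08: "Command Aborted due to SQ Deletion",
}

def decode_status(pair: str) -> str:
    """Decode a "(sct/sc)" string such as "(00/08)" into readable text."""
    sct_hex, sc_hex = pair.strip("()").split("/")
    sct, sc = int(sct_hex, 16), int(sc_hex, 16)
    sct_name = STATUS_CODE_TYPES.get(sct, f"Unknown SCT {sct:#x}")
    if sct == 0x0:
        sc_name = GENERIC_STATUS_CODES.get(sc, f"Unknown SC {sc:#x}")
    else:
        sc_name = f"SC {sc:#x}"
    return f"{sct_name}: {sc_name}"

print(decode_status("(00/08)"))  # Generic Command Status: Command Aborted due to SQ Deletion
```

So the flood of identical completions simply means every in-flight I/O on qid:1 was aborted because its submission queue was deleted when the qpair went down.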
00:26:20.489 [2024-07-15 18:51:36.537352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.489 [2024-07-15 18:51:36.537812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.489 [2024-07-15 18:51:36.537855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:20.489 [2024-07-15 18:51:36.537877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:20.489 [2024-07-15 18:51:36.538316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:20.489 [2024-07-15 18:51:36.538493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.489 [2024-07-15 18:51:36.538501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.489 [2024-07-15 18:51:36.538508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.489 [2024-07-15 18:51:36.541347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
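(Editor's note, not part of the test output.) The recurring `connect() failed, errno = 111` lines are Linux `ECONNREFUSED`: nothing is accepting TCP connections on the NVMe/TCP port while the target side is down, which is exactly what this test provokes. The condition is easy to reproduce locally with a closed port — the port number 4420 below merely mirrors the log; any port with no listener behaves the same:

```python
import errno
import socket

# On Linux, errno 111 is ECONNREFUSED, the value posix_sock_create
# reports while no listener exists on the NVMe/TCP port.
assert errno.ECONNREFUSED == 111  # Linux-specific value; differs on other OSes

# Connect to a loopback port with (presumably) no listener and
# observe the same errno via connect_ex, which returns it instead
# of raising.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(1.0)
rc = s.connect_ex(("127.0.0.1", 4420))
print(rc, errno.errorcode.get(rc))  # typically: 111 ECONNREFUSED
s.close()
```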
00:26:20.489 [2024-07-15 18:51:36.550391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.489 [2024-07-15 18:51:36.550809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.489 [2024-07-15 18:51:36.550825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:20.489 [2024-07-15 18:51:36.550832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:20.489 [2024-07-15 18:51:36.551003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:20.489 [2024-07-15 18:51:36.551174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.489 [2024-07-15 18:51:36.551182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.489 [2024-07-15 18:51:36.551188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.489 [2024-07-15 18:51:36.553878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.489 [2024-07-15 18:51:36.563243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.489 [2024-07-15 18:51:36.563725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.489 [2024-07-15 18:51:36.563741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.489 [2024-07-15 18:51:36.563748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.489 [2024-07-15 18:51:36.563919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.489 [2024-07-15 18:51:36.564090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.489 [2024-07-15 18:51:36.564101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.489 [2024-07-15 18:51:36.564107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.489 [2024-07-15 18:51:36.566793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.489 [2024-07-15 18:51:36.576127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.489 [2024-07-15 18:51:36.576603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.489 [2024-07-15 18:51:36.576620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.489 [2024-07-15 18:51:36.576626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.489 [2024-07-15 18:51:36.576798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.489 [2024-07-15 18:51:36.576972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.489 [2024-07-15 18:51:36.576979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.489 [2024-07-15 18:51:36.576986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.489 [2024-07-15 18:51:36.579711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.490 [2024-07-15 18:51:36.589113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.490 [2024-07-15 18:51:36.589448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.490 [2024-07-15 18:51:36.589464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.490 [2024-07-15 18:51:36.589471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.490 [2024-07-15 18:51:36.589642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.490 [2024-07-15 18:51:36.589817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.490 [2024-07-15 18:51:36.589824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.490 [2024-07-15 18:51:36.589830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.490 [2024-07-15 18:51:36.592533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.490 [2024-07-15 18:51:36.602066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.490 [2024-07-15 18:51:36.602545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.490 [2024-07-15 18:51:36.602561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.490 [2024-07-15 18:51:36.602568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.490 [2024-07-15 18:51:36.602739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.490 [2024-07-15 18:51:36.602913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.490 [2024-07-15 18:51:36.602921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.490 [2024-07-15 18:51:36.602927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.490 [2024-07-15 18:51:36.605617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.490 [2024-07-15 18:51:36.614944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.490 [2024-07-15 18:51:36.615401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.490 [2024-07-15 18:51:36.615417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.490 [2024-07-15 18:51:36.615424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.490 [2024-07-15 18:51:36.615596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.490 [2024-07-15 18:51:36.615767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.490 [2024-07-15 18:51:36.615774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.490 [2024-07-15 18:51:36.615780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.490 [2024-07-15 18:51:36.618517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.490 [2024-07-15 18:51:36.627772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.490 [2024-07-15 18:51:36.628203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.490 [2024-07-15 18:51:36.628218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.490 [2024-07-15 18:51:36.628231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.490 [2024-07-15 18:51:36.628417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.490 [2024-07-15 18:51:36.628588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.490 [2024-07-15 18:51:36.628596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.490 [2024-07-15 18:51:36.628602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.490 [2024-07-15 18:51:36.631277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.490 [2024-07-15 18:51:36.640696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.490 [2024-07-15 18:51:36.641148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.490 [2024-07-15 18:51:36.641164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.490 [2024-07-15 18:51:36.641171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.490 [2024-07-15 18:51:36.641349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.490 [2024-07-15 18:51:36.641521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.490 [2024-07-15 18:51:36.641528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.490 [2024-07-15 18:51:36.641534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.490 [2024-07-15 18:51:36.644210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.490 [2024-07-15 18:51:36.653535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.490 [2024-07-15 18:51:36.653974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.490 [2024-07-15 18:51:36.653989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.490 [2024-07-15 18:51:36.653996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.490 [2024-07-15 18:51:36.654170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.490 [2024-07-15 18:51:36.654347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.490 [2024-07-15 18:51:36.654356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.490 [2024-07-15 18:51:36.654362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.490 [2024-07-15 18:51:36.657034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.490 [2024-07-15 18:51:36.666366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.490 [2024-07-15 18:51:36.666810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.490 [2024-07-15 18:51:36.666844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.490 [2024-07-15 18:51:36.666866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.490 [2024-07-15 18:51:36.667406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.490 [2024-07-15 18:51:36.667578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.490 [2024-07-15 18:51:36.667586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.490 [2024-07-15 18:51:36.667592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.490 [2024-07-15 18:51:36.670268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.490 [2024-07-15 18:51:36.679333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.490 [2024-07-15 18:51:36.679762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.490 [2024-07-15 18:51:36.679804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.490 [2024-07-15 18:51:36.679825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.490 [2024-07-15 18:51:36.680405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.490 [2024-07-15 18:51:36.680578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.490 [2024-07-15 18:51:36.680586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.490 [2024-07-15 18:51:36.680592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.490 [2024-07-15 18:51:36.684479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.490 [2024-07-15 18:51:36.692837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.490 [2024-07-15 18:51:36.693311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.490 [2024-07-15 18:51:36.693353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.490 [2024-07-15 18:51:36.693374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.490 [2024-07-15 18:51:36.693954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.490 [2024-07-15 18:51:36.694547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.490 [2024-07-15 18:51:36.694572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.490 [2024-07-15 18:51:36.694608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.490 [2024-07-15 18:51:36.697320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.490 [2024-07-15 18:51:36.705624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.490 [2024-07-15 18:51:36.706087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.490 [2024-07-15 18:51:36.706129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.490 [2024-07-15 18:51:36.706151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.491 [2024-07-15 18:51:36.706634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.491 [2024-07-15 18:51:36.706806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.491 [2024-07-15 18:51:36.706814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.491 [2024-07-15 18:51:36.706820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.491 [2024-07-15 18:51:36.709497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.491 [2024-07-15 18:51:36.718495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.491 [2024-07-15 18:51:36.718949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-15 18:51:36.718964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.491 [2024-07-15 18:51:36.718971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.491 [2024-07-15 18:51:36.719142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.491 [2024-07-15 18:51:36.719320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.491 [2024-07-15 18:51:36.719328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.491 [2024-07-15 18:51:36.719334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.491 [2024-07-15 18:51:36.722006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.491 [2024-07-15 18:51:36.731395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.491 [2024-07-15 18:51:36.731856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-15 18:51:36.731898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.491 [2024-07-15 18:51:36.731919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.491 [2024-07-15 18:51:36.732500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.491 [2024-07-15 18:51:36.732673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.491 [2024-07-15 18:51:36.732680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.491 [2024-07-15 18:51:36.732687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.491 [2024-07-15 18:51:36.735363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.491 [2024-07-15 18:51:36.744246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.491 [2024-07-15 18:51:36.744661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-15 18:51:36.744679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.491 [2024-07-15 18:51:36.744685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.491 [2024-07-15 18:51:36.744847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.491 [2024-07-15 18:51:36.745009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.491 [2024-07-15 18:51:36.745017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.491 [2024-07-15 18:51:36.745022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.491 [2024-07-15 18:51:36.747707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.491 [2024-07-15 18:51:36.757042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.491 [2024-07-15 18:51:36.757500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-15 18:51:36.757516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.491 [2024-07-15 18:51:36.757522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.491 [2024-07-15 18:51:36.757694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.491 [2024-07-15 18:51:36.757869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.491 [2024-07-15 18:51:36.757876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.491 [2024-07-15 18:51:36.757882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.491 [2024-07-15 18:51:36.760573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.491 [2024-07-15 18:51:36.769844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.491 [2024-07-15 18:51:36.770320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-15 18:51:36.770337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.491 [2024-07-15 18:51:36.770344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.491 [2024-07-15 18:51:36.770522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.491 [2024-07-15 18:51:36.770699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.491 [2024-07-15 18:51:36.770707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.491 [2024-07-15 18:51:36.770714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.491 [2024-07-15 18:51:36.773543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.491 [2024-07-15 18:51:36.782897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.491 [2024-07-15 18:51:36.783324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-15 18:51:36.783341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.491 [2024-07-15 18:51:36.783348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.491 [2024-07-15 18:51:36.783524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.491 [2024-07-15 18:51:36.783709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.491 [2024-07-15 18:51:36.783717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.491 [2024-07-15 18:51:36.783723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.491 [2024-07-15 18:51:36.786553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.491 [2024-07-15 18:51:36.795990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.491 [2024-07-15 18:51:36.796451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-15 18:51:36.796467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.491 [2024-07-15 18:51:36.796474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.491 [2024-07-15 18:51:36.796646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.491 [2024-07-15 18:51:36.796817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.491 [2024-07-15 18:51:36.796825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.491 [2024-07-15 18:51:36.796831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.491 [2024-07-15 18:51:36.799633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.491 [2024-07-15 18:51:36.809043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.491 [2024-07-15 18:51:36.809527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-15 18:51:36.809543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.491 [2024-07-15 18:51:36.809550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.491 [2024-07-15 18:51:36.809728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.491 [2024-07-15 18:51:36.809905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.491 [2024-07-15 18:51:36.809913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.491 [2024-07-15 18:51:36.809919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.491 [2024-07-15 18:51:36.812720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.491 [2024-07-15 18:51:36.821904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.491 [2024-07-15 18:51:36.822343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-15 18:51:36.822386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.491 [2024-07-15 18:51:36.822408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.491 [2024-07-15 18:51:36.822913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.491 [2024-07-15 18:51:36.823086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.491 [2024-07-15 18:51:36.823093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.491 [2024-07-15 18:51:36.823099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.491 [2024-07-15 18:51:36.825874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.491 [2024-07-15 18:51:36.834764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.491 [2024-07-15 18:51:36.835139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-15 18:51:36.835181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.491 [2024-07-15 18:51:36.835203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.491 [2024-07-15 18:51:36.835717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.491 [2024-07-15 18:51:36.835894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.491 [2024-07-15 18:51:36.835902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.491 [2024-07-15 18:51:36.835908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.491 [2024-07-15 18:51:36.838641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.491 [2024-07-15 18:51:36.847627] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.491 [2024-07-15 18:51:36.848081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-15 18:51:36.848123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.491 [2024-07-15 18:51:36.848145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.491 [2024-07-15 18:51:36.848656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.491 [2024-07-15 18:51:36.848829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.491 [2024-07-15 18:51:36.848836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.491 [2024-07-15 18:51:36.848843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.492 [2024-07-15 18:51:36.851515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.492 [2024-07-15 18:51:36.860490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.492 [2024-07-15 18:51:36.860962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-15 18:51:36.861003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.492 [2024-07-15 18:51:36.861025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.492 [2024-07-15 18:51:36.861612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.492 [2024-07-15 18:51:36.861784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.492 [2024-07-15 18:51:36.861792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.492 [2024-07-15 18:51:36.861798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.492 [2024-07-15 18:51:36.864474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.492 [2024-07-15 18:51:36.873329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.492 [2024-07-15 18:51:36.873838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.492 [2024-07-15 18:51:36.873880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:20.492 [2024-07-15 18:51:36.873909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:20.492 [2024-07-15 18:51:36.874505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:20.492 [2024-07-15 18:51:36.874968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.492 [2024-07-15 18:51:36.874976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.492 [2024-07-15 18:51:36.874982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.492 [2024-07-15 18:51:36.877655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.492 [2024-07-15 18:51:36.886160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.492 [2024-07-15 18:51:36.886611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.492 [2024-07-15 18:51:36.886626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:20.492 [2024-07-15 18:51:36.886633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:20.492 [2024-07-15 18:51:36.886805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:20.492 [2024-07-15 18:51:36.886976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.492 [2024-07-15 18:51:36.886984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.492 [2024-07-15 18:51:36.886990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.492 [2024-07-15 18:51:36.889709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.492 [2024-07-15 18:51:36.899073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.492 [2024-07-15 18:51:36.899531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.492 [2024-07-15 18:51:36.899546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:20.492 [2024-07-15 18:51:36.899552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:20.492 [2024-07-15 18:51:36.899723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:20.492 [2024-07-15 18:51:36.899894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.492 [2024-07-15 18:51:36.899901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.492 [2024-07-15 18:51:36.899907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.492 [2024-07-15 18:51:36.902597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.492 [2024-07-15 18:51:36.911955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.492 [2024-07-15 18:51:36.912405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.492 [2024-07-15 18:51:36.912421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:20.492 [2024-07-15 18:51:36.912428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:20.492 [2024-07-15 18:51:36.912599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:20.492 [2024-07-15 18:51:36.912770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.492 [2024-07-15 18:51:36.912780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.492 [2024-07-15 18:51:36.912786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.492 [2024-07-15 18:51:36.915487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.492 [2024-07-15 18:51:36.924764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.492 [2024-07-15 18:51:36.925248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.492 [2024-07-15 18:51:36.925291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:20.492 [2024-07-15 18:51:36.925313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:20.492 [2024-07-15 18:51:36.925891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:20.492 [2024-07-15 18:51:36.926380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.492 [2024-07-15 18:51:36.926388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.492 [2024-07-15 18:51:36.926395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.492 [2024-07-15 18:51:36.929130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.492 [2024-07-15 18:51:36.937752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.492 [2024-07-15 18:51:36.938195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.492 [2024-07-15 18:51:36.938247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:20.492 [2024-07-15 18:51:36.938269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:20.492 [2024-07-15 18:51:36.938851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:20.492 [2024-07-15 18:51:36.939322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.492 [2024-07-15 18:51:36.939330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.492 [2024-07-15 18:51:36.939336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.492 [2024-07-15 18:51:36.942010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.492 [2024-07-15 18:51:36.950586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.492 [2024-07-15 18:51:36.951058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.492 [2024-07-15 18:51:36.951073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:20.492 [2024-07-15 18:51:36.951080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:20.492 [2024-07-15 18:51:36.951259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:20.492 [2024-07-15 18:51:36.951432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.492 [2024-07-15 18:51:36.951439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.492 [2024-07-15 18:51:36.951445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.492 [2024-07-15 18:51:36.954123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.492 [2024-07-15 18:51:36.963464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.492 [2024-07-15 18:51:36.963912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.492 [2024-07-15 18:51:36.963928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:20.492 [2024-07-15 18:51:36.963934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:20.492 [2024-07-15 18:51:36.964105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:20.492 [2024-07-15 18:51:36.964283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.492 [2024-07-15 18:51:36.964291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.492 [2024-07-15 18:51:36.964298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.492 [2024-07-15 18:51:36.966973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.492 [2024-07-15 18:51:36.976324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.492 [2024-07-15 18:51:36.976829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.492 [2024-07-15 18:51:36.976845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:20.492 [2024-07-15 18:51:36.976851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:20.492 [2024-07-15 18:51:36.977023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:20.492 [2024-07-15 18:51:36.977194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.492 [2024-07-15 18:51:36.977201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.492 [2024-07-15 18:51:36.977207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.492 [2024-07-15 18:51:36.979886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.492 [2024-07-15 18:51:36.989254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.492 [2024-07-15 18:51:36.989676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.492 [2024-07-15 18:51:36.989691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:20.492 [2024-07-15 18:51:36.989697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:20.492 [2024-07-15 18:51:36.989859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:20.492 [2024-07-15 18:51:36.990021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.492 [2024-07-15 18:51:36.990028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.492 [2024-07-15 18:51:36.990034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.492 [2024-07-15 18:51:36.992730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.492 [2024-07-15 18:51:37.002082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.492 [2024-07-15 18:51:37.002535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.492 [2024-07-15 18:51:37.002551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:20.492 [2024-07-15 18:51:37.002561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:20.492 [2024-07-15 18:51:37.002732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:20.492 [2024-07-15 18:51:37.002903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.492 [2024-07-15 18:51:37.002910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.492 [2024-07-15 18:51:37.002916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.493 [2024-07-15 18:51:37.005604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.493 [2024-07-15 18:51:37.014970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.493 [2024-07-15 18:51:37.015451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.493 [2024-07-15 18:51:37.015495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:20.493 [2024-07-15 18:51:37.015516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:20.493 [2024-07-15 18:51:37.016094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:20.493 [2024-07-15 18:51:37.016292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.493 [2024-07-15 18:51:37.016300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.493 [2024-07-15 18:51:37.016306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.493 [2024-07-15 18:51:37.018980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.493 [2024-07-15 18:51:37.027808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.493 [2024-07-15 18:51:37.028262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.493 [2024-07-15 18:51:37.028278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:20.493 [2024-07-15 18:51:37.028285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:20.493 [2024-07-15 18:51:37.028461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:20.493 [2024-07-15 18:51:37.028638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.493 [2024-07-15 18:51:37.028646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.493 [2024-07-15 18:51:37.028652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.493 [2024-07-15 18:51:37.031480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.493 [2024-07-15 18:51:37.040945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.493 [2024-07-15 18:51:37.041389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.493 [2024-07-15 18:51:37.041405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:20.493 [2024-07-15 18:51:37.041412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:20.493 [2024-07-15 18:51:37.041596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:20.493 [2024-07-15 18:51:37.041768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.493 [2024-07-15 18:51:37.041781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.493 [2024-07-15 18:51:37.041787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.493 [2024-07-15 18:51:37.044613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.493 [2024-07-15 18:51:37.053820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.493 [2024-07-15 18:51:37.054282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.493 [2024-07-15 18:51:37.054324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:20.493 [2024-07-15 18:51:37.054345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:20.493 [2024-07-15 18:51:37.054760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:20.493 [2024-07-15 18:51:37.054933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.493 [2024-07-15 18:51:37.054940] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.493 [2024-07-15 18:51:37.054947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.493 [2024-07-15 18:51:37.057811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.493 [2024-07-15 18:51:37.066918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.493 [2024-07-15 18:51:37.067357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.493 [2024-07-15 18:51:37.067373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:20.493 [2024-07-15 18:51:37.067380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:20.493 [2024-07-15 18:51:37.067550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:20.493 [2024-07-15 18:51:37.067721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.493 [2024-07-15 18:51:37.067729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.493 [2024-07-15 18:51:37.067735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.493 [2024-07-15 18:51:37.070533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.493 [2024-07-15 18:51:37.079786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.493 [2024-07-15 18:51:37.080198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.493 [2024-07-15 18:51:37.080213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:20.493 [2024-07-15 18:51:37.080220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:20.493 [2024-07-15 18:51:37.080418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:20.493 [2024-07-15 18:51:37.080600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.493 [2024-07-15 18:51:37.080608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.493 [2024-07-15 18:51:37.080614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.493 [2024-07-15 18:51:37.083290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.493 [2024-07-15 18:51:37.092692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.493 [2024-07-15 18:51:37.093152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.493 [2024-07-15 18:51:37.093166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:20.493 [2024-07-15 18:51:37.093173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:20.493 [2024-07-15 18:51:37.093350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:20.493 [2024-07-15 18:51:37.093521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.493 [2024-07-15 18:51:37.093528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.493 [2024-07-15 18:51:37.093534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.493 [2024-07-15 18:51:37.096209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.493 [2024-07-15 18:51:37.105530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.493 [2024-07-15 18:51:37.105981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.493 [2024-07-15 18:51:37.106029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:20.493 [2024-07-15 18:51:37.106050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:20.493 [2024-07-15 18:51:37.106607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:20.493 [2024-07-15 18:51:37.106779] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.493 [2024-07-15 18:51:37.106786] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.493 [2024-07-15 18:51:37.106792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.493 [2024-07-15 18:51:37.109469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.493 [2024-07-15 18:51:37.118317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.493 [2024-07-15 18:51:37.118785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.493 [2024-07-15 18:51:37.118827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:20.493 [2024-07-15 18:51:37.118848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:20.493 [2024-07-15 18:51:37.119423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:20.493 [2024-07-15 18:51:37.119596] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.493 [2024-07-15 18:51:37.119604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.493 [2024-07-15 18:51:37.119610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.493 [2024-07-15 18:51:37.122347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.493 [2024-07-15 18:51:37.131145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.493 [2024-07-15 18:51:37.131594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.493 [2024-07-15 18:51:37.131609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:20.493 [2024-07-15 18:51:37.131616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:20.493 [2024-07-15 18:51:37.131790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:20.493 [2024-07-15 18:51:37.131961] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.493 [2024-07-15 18:51:37.131969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.493 [2024-07-15 18:51:37.131975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.493 [2024-07-15 18:51:37.134660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.493 [2024-07-15 18:51:37.143925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.493 [2024-07-15 18:51:37.144388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.493 [2024-07-15 18:51:37.144431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:20.493 [2024-07-15 18:51:37.144454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:20.493 [2024-07-15 18:51:37.145032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:20.493 [2024-07-15 18:51:37.145609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.493 [2024-07-15 18:51:37.145618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.493 [2024-07-15 18:51:37.145624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.493 [2024-07-15 18:51:37.149659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.493 [2024-07-15 18:51:37.157397] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.493 [2024-07-15 18:51:37.157872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.493 [2024-07-15 18:51:37.157913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:20.493 [2024-07-15 18:51:37.157934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:20.493 [2024-07-15 18:51:37.158480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:20.493 [2024-07-15 18:51:37.158653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.493 [2024-07-15 18:51:37.158660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.493 [2024-07-15 18:51:37.158666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.493 [2024-07-15 18:51:37.161360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.494 [2024-07-15 18:51:37.170294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:20.494 [2024-07-15 18:51:37.170724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.494 [2024-07-15 18:51:37.170771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:20.494 [2024-07-15 18:51:37.170793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:20.494 [2024-07-15 18:51:37.171389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:20.494 [2024-07-15 18:51:37.171907] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:20.494 [2024-07-15 18:51:37.171915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:20.494 [2024-07-15 18:51:37.171924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:20.494 [2024-07-15 18:51:37.174600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:20.494 [2024-07-15 18:51:37.183147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.494 [2024-07-15 18:51:37.183605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-15 18:51:37.183621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.494 [2024-07-15 18:51:37.183628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.494 [2024-07-15 18:51:37.183805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.494 [2024-07-15 18:51:37.183981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.494 [2024-07-15 18:51:37.183990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.494 [2024-07-15 18:51:37.183996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.494 [2024-07-15 18:51:37.186841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.754 [2024-07-15 18:51:37.196036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.754 [2024-07-15 18:51:37.196493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.754 [2024-07-15 18:51:37.196508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.754 [2024-07-15 18:51:37.196515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.754 [2024-07-15 18:51:37.196686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.754 [2024-07-15 18:51:37.196856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.754 [2024-07-15 18:51:37.196864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.754 [2024-07-15 18:51:37.196870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.754 [2024-07-15 18:51:37.199561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.754 [2024-07-15 18:51:37.208858] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.754 [2024-07-15 18:51:37.209284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.754 [2024-07-15 18:51:37.209299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.754 [2024-07-15 18:51:37.209306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.754 [2024-07-15 18:51:37.209484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.754 [2024-07-15 18:51:37.209646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.754 [2024-07-15 18:51:37.209654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.754 [2024-07-15 18:51:37.209659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.754 [2024-07-15 18:51:37.212348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.754 [2024-07-15 18:51:37.221659] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.754 [2024-07-15 18:51:37.222090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.754 [2024-07-15 18:51:37.222139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.754 [2024-07-15 18:51:37.222161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.754 [2024-07-15 18:51:37.222678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.754 [2024-07-15 18:51:37.222851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.754 [2024-07-15 18:51:37.222858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.754 [2024-07-15 18:51:37.222864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.754 [2024-07-15 18:51:37.225587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.754 [2024-07-15 18:51:37.234576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.754 [2024-07-15 18:51:37.235034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.754 [2024-07-15 18:51:37.235075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.754 [2024-07-15 18:51:37.235096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.754 [2024-07-15 18:51:37.235688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.754 [2024-07-15 18:51:37.236042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.754 [2024-07-15 18:51:37.236050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.754 [2024-07-15 18:51:37.236056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.754 [2024-07-15 18:51:37.238823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.754 [2024-07-15 18:51:37.247495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.754 [2024-07-15 18:51:37.247952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.754 [2024-07-15 18:51:37.247993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.754 [2024-07-15 18:51:37.248015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.754 [2024-07-15 18:51:37.248653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.754 [2024-07-15 18:51:37.249068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.754 [2024-07-15 18:51:37.249076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.754 [2024-07-15 18:51:37.249082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.754 [2024-07-15 18:51:37.251755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.754 [2024-07-15 18:51:37.260294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.754 [2024-07-15 18:51:37.260722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.754 [2024-07-15 18:51:37.260738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.754 [2024-07-15 18:51:37.260744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.754 [2024-07-15 18:51:37.260916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.754 [2024-07-15 18:51:37.261091] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.754 [2024-07-15 18:51:37.261099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.754 [2024-07-15 18:51:37.261105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.754 [2024-07-15 18:51:37.263786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.754 [2024-07-15 18:51:37.273163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.754 [2024-07-15 18:51:37.273620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.754 [2024-07-15 18:51:37.273636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.754 [2024-07-15 18:51:37.273643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.754 [2024-07-15 18:51:37.273814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.754 [2024-07-15 18:51:37.273986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.754 [2024-07-15 18:51:37.273994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.754 [2024-07-15 18:51:37.274001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.754 [2024-07-15 18:51:37.276687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.754 [2024-07-15 18:51:37.286028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.754 [2024-07-15 18:51:37.286422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.754 [2024-07-15 18:51:37.286438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.754 [2024-07-15 18:51:37.286445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.754 [2024-07-15 18:51:37.286622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.755 [2024-07-15 18:51:37.286798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.755 [2024-07-15 18:51:37.286806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.755 [2024-07-15 18:51:37.286812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.755 [2024-07-15 18:51:37.289641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.755 [2024-07-15 18:51:37.299087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.755 [2024-07-15 18:51:37.299562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.755 [2024-07-15 18:51:37.299605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.755 [2024-07-15 18:51:37.299627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.755 [2024-07-15 18:51:37.300073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.755 [2024-07-15 18:51:37.300256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.755 [2024-07-15 18:51:37.300264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.755 [2024-07-15 18:51:37.300271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.755 [2024-07-15 18:51:37.303025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.755 [2024-07-15 18:51:37.312139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.755 [2024-07-15 18:51:37.312609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.755 [2024-07-15 18:51:37.312650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.755 [2024-07-15 18:51:37.312671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.755 [2024-07-15 18:51:37.313135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.755 [2024-07-15 18:51:37.313330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.755 [2024-07-15 18:51:37.313339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.755 [2024-07-15 18:51:37.313345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.755 [2024-07-15 18:51:37.316124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.755 [2024-07-15 18:51:37.325006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.755 [2024-07-15 18:51:37.325461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.755 [2024-07-15 18:51:37.325477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.755 [2024-07-15 18:51:37.325483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.755 [2024-07-15 18:51:37.325654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.755 [2024-07-15 18:51:37.325825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.755 [2024-07-15 18:51:37.325833] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.755 [2024-07-15 18:51:37.325839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.755 [2024-07-15 18:51:37.328563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.755 [2024-07-15 18:51:37.338020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.755 [2024-07-15 18:51:37.338455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.755 [2024-07-15 18:51:37.338472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.755 [2024-07-15 18:51:37.338478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.755 [2024-07-15 18:51:37.338650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.755 [2024-07-15 18:51:37.338820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.755 [2024-07-15 18:51:37.338828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.755 [2024-07-15 18:51:37.338834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.755 [2024-07-15 18:51:37.341604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.755 [2024-07-15 18:51:37.350868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.755 [2024-07-15 18:51:37.351329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.755 [2024-07-15 18:51:37.351372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.755 [2024-07-15 18:51:37.351401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.755 [2024-07-15 18:51:37.351986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.755 [2024-07-15 18:51:37.352158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.755 [2024-07-15 18:51:37.352166] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.755 [2024-07-15 18:51:37.352172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.755 [2024-07-15 18:51:37.354855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.755 [2024-07-15 18:51:37.363709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.755 [2024-07-15 18:51:37.364159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.755 [2024-07-15 18:51:37.364175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.755 [2024-07-15 18:51:37.364182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.755 [2024-07-15 18:51:37.364357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.755 [2024-07-15 18:51:37.364529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.755 [2024-07-15 18:51:37.364537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.755 [2024-07-15 18:51:37.364543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.755 [2024-07-15 18:51:37.367216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.755 [2024-07-15 18:51:37.376521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.755 [2024-07-15 18:51:37.376965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.755 [2024-07-15 18:51:37.376980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.755 [2024-07-15 18:51:37.376987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.755 [2024-07-15 18:51:37.377158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.755 [2024-07-15 18:51:37.377336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.755 [2024-07-15 18:51:37.377344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.755 [2024-07-15 18:51:37.377350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.755 [2024-07-15 18:51:37.380027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.755 [2024-07-15 18:51:37.389493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.755 [2024-07-15 18:51:37.389968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.755 [2024-07-15 18:51:37.389983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.755 [2024-07-15 18:51:37.389990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.755 [2024-07-15 18:51:37.390161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.755 [2024-07-15 18:51:37.390343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.755 [2024-07-15 18:51:37.390355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.755 [2024-07-15 18:51:37.390360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.755 [2024-07-15 18:51:37.393033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.755 [2024-07-15 18:51:37.402382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.755 [2024-07-15 18:51:37.402867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.755 [2024-07-15 18:51:37.402909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.755 [2024-07-15 18:51:37.402931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.755 [2024-07-15 18:51:37.403333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.755 [2024-07-15 18:51:37.403506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.755 [2024-07-15 18:51:37.403514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.755 [2024-07-15 18:51:37.403520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.755 [2024-07-15 18:51:37.406193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.755 [2024-07-15 18:51:37.415348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.755 [2024-07-15 18:51:37.415843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.755 [2024-07-15 18:51:37.415885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.755 [2024-07-15 18:51:37.415906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.755 [2024-07-15 18:51:37.416388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.755 [2024-07-15 18:51:37.416561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.755 [2024-07-15 18:51:37.416569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.755 [2024-07-15 18:51:37.416575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.756 [2024-07-15 18:51:37.419246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.756 [2024-07-15 18:51:37.428251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.756 [2024-07-15 18:51:37.428637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.756 [2024-07-15 18:51:37.428652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.756 [2024-07-15 18:51:37.428658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.756 [2024-07-15 18:51:37.428820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.756 [2024-07-15 18:51:37.428982] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.756 [2024-07-15 18:51:37.428989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.756 [2024-07-15 18:51:37.428995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.756 [2024-07-15 18:51:37.431732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.756 [2024-07-15 18:51:37.441211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.756 [2024-07-15 18:51:37.441551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.756 [2024-07-15 18:51:37.441566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.756 [2024-07-15 18:51:37.441573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.756 [2024-07-15 18:51:37.441744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.756 [2024-07-15 18:51:37.441915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.756 [2024-07-15 18:51:37.441923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.756 [2024-07-15 18:51:37.441929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.756 [2024-07-15 18:51:37.444611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.756 [2024-07-15 18:51:37.454033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.756 [2024-07-15 18:51:37.454498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.756 [2024-07-15 18:51:37.454514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:20.756 [2024-07-15 18:51:37.454520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:20.756 [2024-07-15 18:51:37.454710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:20.756 [2024-07-15 18:51:37.454887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.756 [2024-07-15 18:51:37.454895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.756 [2024-07-15 18:51:37.454901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.756 [2024-07-15 18:51:37.457745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.026 [2024-07-15 18:51:37.467118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.026 [2024-07-15 18:51:37.467590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.026 [2024-07-15 18:51:37.467633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.026 [2024-07-15 18:51:37.467654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.026 [2024-07-15 18:51:37.468256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.026 [2024-07-15 18:51:37.468663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.026 [2024-07-15 18:51:37.468671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.026 [2024-07-15 18:51:37.468677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.026 [2024-07-15 18:51:37.471352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.026 [2024-07-15 18:51:37.479947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.026 [2024-07-15 18:51:37.480399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.026 [2024-07-15 18:51:37.480442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.026 [2024-07-15 18:51:37.480463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.026 [2024-07-15 18:51:37.481051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.026 [2024-07-15 18:51:37.481577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.026 [2024-07-15 18:51:37.481585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.026 [2024-07-15 18:51:37.481591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.026 [2024-07-15 18:51:37.484269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.026 [2024-07-15 18:51:37.492771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.026 [2024-07-15 18:51:37.493222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.026 [2024-07-15 18:51:37.493242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.026 [2024-07-15 18:51:37.493249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.026 [2024-07-15 18:51:37.493421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.026 [2024-07-15 18:51:37.493593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.026 [2024-07-15 18:51:37.493600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.026 [2024-07-15 18:51:37.493606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.026 [2024-07-15 18:51:37.496310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.026 [2024-07-15 18:51:37.505693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.026 [2024-07-15 18:51:37.506132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.026 [2024-07-15 18:51:37.506147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.026 [2024-07-15 18:51:37.506154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.026 [2024-07-15 18:51:37.506331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.026 [2024-07-15 18:51:37.506503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.026 [2024-07-15 18:51:37.506511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.026 [2024-07-15 18:51:37.506517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.026 [2024-07-15 18:51:37.509189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.026 [2024-07-15 18:51:37.518639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.026 [2024-07-15 18:51:37.519120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.026 [2024-07-15 18:51:37.519136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.026 [2024-07-15 18:51:37.519142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.026 [2024-07-15 18:51:37.519319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.026 [2024-07-15 18:51:37.519491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.026 [2024-07-15 18:51:37.519500] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.026 [2024-07-15 18:51:37.519510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.026 [2024-07-15 18:51:37.522261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.026 [2024-07-15 18:51:37.531526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.026 [2024-07-15 18:51:37.531941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.026 [2024-07-15 18:51:37.531957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.026 [2024-07-15 18:51:37.531964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.026 [2024-07-15 18:51:37.532134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.026 [2024-07-15 18:51:37.532310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.026 [2024-07-15 18:51:37.532319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.026 [2024-07-15 18:51:37.532325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.026 [2024-07-15 18:51:37.535006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.026 [2024-07-15 18:51:37.544549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.026 [2024-07-15 18:51:37.544999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.026 [2024-07-15 18:51:37.545015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.026 [2024-07-15 18:51:37.545022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.026 [2024-07-15 18:51:37.545193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.026 [2024-07-15 18:51:37.545390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.026 [2024-07-15 18:51:37.545399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.026 [2024-07-15 18:51:37.545407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.026 [2024-07-15 18:51:37.548242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.026 [2024-07-15 18:51:37.557558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.026 [2024-07-15 18:51:37.558028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.026 [2024-07-15 18:51:37.558070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.026 [2024-07-15 18:51:37.558093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.026 [2024-07-15 18:51:37.558667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.026 [2024-07-15 18:51:37.558840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.026 [2024-07-15 18:51:37.558848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.026 [2024-07-15 18:51:37.558854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.026 [2024-07-15 18:51:37.561664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.026 [2024-07-15 18:51:37.570589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.026 [2024-07-15 18:51:37.571083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.026 [2024-07-15 18:51:37.571126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.026 [2024-07-15 18:51:37.571149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.026 [2024-07-15 18:51:37.571553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.026 [2024-07-15 18:51:37.571730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.026 [2024-07-15 18:51:37.571740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.026 [2024-07-15 18:51:37.571747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.026 [2024-07-15 18:51:37.574553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.026 [2024-07-15 18:51:37.583737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.026 [2024-07-15 18:51:37.584111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.026 [2024-07-15 18:51:37.584126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.026 [2024-07-15 18:51:37.584133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.026 [2024-07-15 18:51:37.584311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.026 [2024-07-15 18:51:37.584483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.026 [2024-07-15 18:51:37.584490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.026 [2024-07-15 18:51:37.584496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.026 [2024-07-15 18:51:37.587175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.026 [2024-07-15 18:51:37.596714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.026 [2024-07-15 18:51:37.597137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.026 [2024-07-15 18:51:37.597154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.026 [2024-07-15 18:51:37.597162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.027 [2024-07-15 18:51:37.597340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.027 [2024-07-15 18:51:37.597512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.027 [2024-07-15 18:51:37.597520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.027 [2024-07-15 18:51:37.597526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.027 [2024-07-15 18:51:37.600204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.027 [2024-07-15 18:51:37.609696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.027 [2024-07-15 18:51:37.610178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.027 [2024-07-15 18:51:37.610193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.027 [2024-07-15 18:51:37.610200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.027 [2024-07-15 18:51:37.610382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.027 [2024-07-15 18:51:37.610554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.027 [2024-07-15 18:51:37.610562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.027 [2024-07-15 18:51:37.610569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.027 [2024-07-15 18:51:37.613269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.027 [2024-07-15 18:51:37.622570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.027 [2024-07-15 18:51:37.622986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.027 [2024-07-15 18:51:37.623002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.027 [2024-07-15 18:51:37.623009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.027 [2024-07-15 18:51:37.623180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.027 [2024-07-15 18:51:37.623363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.027 [2024-07-15 18:51:37.623371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.027 [2024-07-15 18:51:37.623377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.027 [2024-07-15 18:51:37.626147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.027 [2024-07-15 18:51:37.635682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.027 [2024-07-15 18:51:37.636125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.027 [2024-07-15 18:51:37.636141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.027 [2024-07-15 18:51:37.636148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.027 [2024-07-15 18:51:37.636331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.027 [2024-07-15 18:51:37.636509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.027 [2024-07-15 18:51:37.636517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.027 [2024-07-15 18:51:37.636523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.027 [2024-07-15 18:51:37.639356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.027 [2024-07-15 18:51:37.648878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.027 [2024-07-15 18:51:37.649358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.027 [2024-07-15 18:51:37.649374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.027 [2024-07-15 18:51:37.649381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.027 [2024-07-15 18:51:37.649557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.027 [2024-07-15 18:51:37.649733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.027 [2024-07-15 18:51:37.649741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.027 [2024-07-15 18:51:37.649750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.027 [2024-07-15 18:51:37.652580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.027 [2024-07-15 18:51:37.661943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.027 [2024-07-15 18:51:37.662398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.027 [2024-07-15 18:51:37.662414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.027 [2024-07-15 18:51:37.662421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.027 [2024-07-15 18:51:37.662598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.027 [2024-07-15 18:51:37.662773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.027 [2024-07-15 18:51:37.662781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.027 [2024-07-15 18:51:37.662787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.027 [2024-07-15 18:51:37.665619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.027 [2024-07-15 18:51:37.674986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.027 [2024-07-15 18:51:37.675451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.027 [2024-07-15 18:51:37.675467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.027 [2024-07-15 18:51:37.675474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.027 [2024-07-15 18:51:37.675651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.027 [2024-07-15 18:51:37.675827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.027 [2024-07-15 18:51:37.675835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.027 [2024-07-15 18:51:37.675841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.027 [2024-07-15 18:51:37.678667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.027 [2024-07-15 18:51:37.688066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.027 [2024-07-15 18:51:37.688553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.027 [2024-07-15 18:51:37.688569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.027 [2024-07-15 18:51:37.688576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.027 [2024-07-15 18:51:37.688758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.027 [2024-07-15 18:51:37.688941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.027 [2024-07-15 18:51:37.688949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.027 [2024-07-15 18:51:37.688955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.027 [2024-07-15 18:51:37.691842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.027 [2024-07-15 18:51:37.701260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.027 [2024-07-15 18:51:37.701716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.027 [2024-07-15 18:51:37.701771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.027 [2024-07-15 18:51:37.701793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.027 [2024-07-15 18:51:37.702291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.027 [2024-07-15 18:51:37.702546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.027 [2024-07-15 18:51:37.702557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.027 [2024-07-15 18:51:37.702566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.027 [2024-07-15 18:51:37.706633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.027 [2024-07-15 18:51:37.714933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.027 [2024-07-15 18:51:37.715321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.027 [2024-07-15 18:51:37.715337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.027 [2024-07-15 18:51:37.715344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.027 [2024-07-15 18:51:37.715527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.027 [2024-07-15 18:51:37.715699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.027 [2024-07-15 18:51:37.715707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.027 [2024-07-15 18:51:37.715713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.027 [2024-07-15 18:51:37.718559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.292 [2024-07-15 18:51:37.728068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.292 [2024-07-15 18:51:37.728514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.292 [2024-07-15 18:51:37.728530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.292 [2024-07-15 18:51:37.728537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.292 [2024-07-15 18:51:37.728712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.292 [2024-07-15 18:51:37.728890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.292 [2024-07-15 18:51:37.728898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.292 [2024-07-15 18:51:37.728905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.292 [2024-07-15 18:51:37.731609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.292 [2024-07-15 18:51:37.741196] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.292 [2024-07-15 18:51:37.741519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.292 [2024-07-15 18:51:37.741535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.292 [2024-07-15 18:51:37.741542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.292 [2024-07-15 18:51:37.741713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.292 [2024-07-15 18:51:37.741887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.292 [2024-07-15 18:51:37.741895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.292 [2024-07-15 18:51:37.741901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.292 [2024-07-15 18:51:37.744580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.292 [2024-07-15 18:51:37.754044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.292 [2024-07-15 18:51:37.754422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.292 [2024-07-15 18:51:37.754464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.292 [2024-07-15 18:51:37.754485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.292 [2024-07-15 18:51:37.755061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.292 [2024-07-15 18:51:37.755239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.292 [2024-07-15 18:51:37.755248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.292 [2024-07-15 18:51:37.755254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.292 [2024-07-15 18:51:37.757929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.292 [2024-07-15 18:51:37.767012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.292 [2024-07-15 18:51:37.767408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.292 [2024-07-15 18:51:37.767425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.292 [2024-07-15 18:51:37.767433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.292 [2024-07-15 18:51:37.767605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.292 [2024-07-15 18:51:37.767777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.292 [2024-07-15 18:51:37.767786] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.292 [2024-07-15 18:51:37.767792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.292 [2024-07-15 18:51:37.770505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.292 [2024-07-15 18:51:37.779991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.292 [2024-07-15 18:51:37.780365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.292 [2024-07-15 18:51:37.780409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.292 [2024-07-15 18:51:37.780431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.292 [2024-07-15 18:51:37.781015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.292 [2024-07-15 18:51:37.781187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.292 [2024-07-15 18:51:37.781195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.292 [2024-07-15 18:51:37.781202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.292 [2024-07-15 18:51:37.783936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.292 [2024-07-15 18:51:37.792936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.292 [2024-07-15 18:51:37.793270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.292 [2024-07-15 18:51:37.793286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.292 [2024-07-15 18:51:37.793293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.292 [2024-07-15 18:51:37.793474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.292 [2024-07-15 18:51:37.793636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.292 [2024-07-15 18:51:37.793643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.292 [2024-07-15 18:51:37.793649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.292 [2024-07-15 18:51:37.796315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.292 [2024-07-15 18:51:37.805989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.292 [2024-07-15 18:51:37.806409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.292 [2024-07-15 18:51:37.806425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.292 [2024-07-15 18:51:37.806432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.292 [2024-07-15 18:51:37.806608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.292 [2024-07-15 18:51:37.806793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.292 [2024-07-15 18:51:37.806801] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.292 [2024-07-15 18:51:37.806807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.292 [2024-07-15 18:51:37.809551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.292 [2024-07-15 18:51:37.818982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.292 [2024-07-15 18:51:37.819374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.292 [2024-07-15 18:51:37.819416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.292 [2024-07-15 18:51:37.819437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.292 [2024-07-15 18:51:37.820014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.292 [2024-07-15 18:51:37.820252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.292 [2024-07-15 18:51:37.820260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.292 [2024-07-15 18:51:37.820266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.292 [2024-07-15 18:51:37.823007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.292 [2024-07-15 18:51:37.831894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.292 [2024-07-15 18:51:37.832274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.292 [2024-07-15 18:51:37.832291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.292 [2024-07-15 18:51:37.832301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.292 [2024-07-15 18:51:37.832472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.292 [2024-07-15 18:51:37.832643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.292 [2024-07-15 18:51:37.832651] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.292 [2024-07-15 18:51:37.832657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.292 [2024-07-15 18:51:37.835383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.292 [2024-07-15 18:51:37.844860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.293 [2024-07-15 18:51:37.845195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.293 [2024-07-15 18:51:37.845211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.293 [2024-07-15 18:51:37.845218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.293 [2024-07-15 18:51:37.845394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.293 [2024-07-15 18:51:37.845565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.293 [2024-07-15 18:51:37.845573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.293 [2024-07-15 18:51:37.845579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.293 [2024-07-15 18:51:37.848259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.293 [2024-07-15 18:51:37.857732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.293 [2024-07-15 18:51:37.858120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.293 [2024-07-15 18:51:37.858136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.293 [2024-07-15 18:51:37.858142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.293 [2024-07-15 18:51:37.858320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.293 [2024-07-15 18:51:37.858492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.293 [2024-07-15 18:51:37.858500] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.293 [2024-07-15 18:51:37.858505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.293 [2024-07-15 18:51:37.861180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.293 [2024-07-15 18:51:37.870663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.293 [2024-07-15 18:51:37.871101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.293 [2024-07-15 18:51:37.871116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.293 [2024-07-15 18:51:37.871123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.293 [2024-07-15 18:51:37.871300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.293 [2024-07-15 18:51:37.871472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.293 [2024-07-15 18:51:37.871483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.293 [2024-07-15 18:51:37.871489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.293 [2024-07-15 18:51:37.874164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.293 [2024-07-15 18:51:37.883592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.293 [2024-07-15 18:51:37.884001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.293 [2024-07-15 18:51:37.884017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.293 [2024-07-15 18:51:37.884024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.293 [2024-07-15 18:51:37.884195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.293 [2024-07-15 18:51:37.884372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.293 [2024-07-15 18:51:37.884381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.293 [2024-07-15 18:51:37.884387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.293 [2024-07-15 18:51:37.887126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.293 [2024-07-15 18:51:37.896538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.293 [2024-07-15 18:51:37.896947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.293 [2024-07-15 18:51:37.896962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.293 [2024-07-15 18:51:37.896969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.293 [2024-07-15 18:51:37.897140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.293 [2024-07-15 18:51:37.897317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.293 [2024-07-15 18:51:37.897325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.293 [2024-07-15 18:51:37.897331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.293 [2024-07-15 18:51:37.900069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.293 [2024-07-15 18:51:37.909366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.293 [2024-07-15 18:51:37.909807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.293 [2024-07-15 18:51:37.909822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.293 [2024-07-15 18:51:37.909829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.293 [2024-07-15 18:51:37.910000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.293 [2024-07-15 18:51:37.910171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.293 [2024-07-15 18:51:37.910179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.293 [2024-07-15 18:51:37.910185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.293 [2024-07-15 18:51:37.912868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.293 [2024-07-15 18:51:37.922326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.293 [2024-07-15 18:51:37.922744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.293 [2024-07-15 18:51:37.922759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.293 [2024-07-15 18:51:37.922766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.293 [2024-07-15 18:51:37.922938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.293 [2024-07-15 18:51:37.923110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.293 [2024-07-15 18:51:37.923118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.293 [2024-07-15 18:51:37.923124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.293 [2024-07-15 18:51:37.925807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.293 [2024-07-15 18:51:37.935360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.293 [2024-07-15 18:51:37.935704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.293 [2024-07-15 18:51:37.935720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.293 [2024-07-15 18:51:37.935727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.293 [2024-07-15 18:51:37.935898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.293 [2024-07-15 18:51:37.936070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.293 [2024-07-15 18:51:37.936078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.293 [2024-07-15 18:51:37.936084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.293 [2024-07-15 18:51:37.938823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.293 [2024-07-15 18:51:37.948320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.293 [2024-07-15 18:51:37.948696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.293 [2024-07-15 18:51:37.948736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.293 [2024-07-15 18:51:37.948757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.293 [2024-07-15 18:51:37.949350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.293 [2024-07-15 18:51:37.949828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.293 [2024-07-15 18:51:37.949836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.293 [2024-07-15 18:51:37.949842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.293 [2024-07-15 18:51:37.952523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.293 [2024-07-15 18:51:37.961176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.293 [2024-07-15 18:51:37.961654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.293 [2024-07-15 18:51:37.961669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.293 [2024-07-15 18:51:37.961676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.293 [2024-07-15 18:51:37.961850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.293 [2024-07-15 18:51:37.962022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.293 [2024-07-15 18:51:37.962029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.293 [2024-07-15 18:51:37.962036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.293 [2024-07-15 18:51:37.964718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.293 [2024-07-15 18:51:37.974077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.293 [2024-07-15 18:51:37.974535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.293 [2024-07-15 18:51:37.974550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.293 [2024-07-15 18:51:37.974557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.293 [2024-07-15 18:51:37.974728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.293 [2024-07-15 18:51:37.974899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.293 [2024-07-15 18:51:37.974906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.293 [2024-07-15 18:51:37.974912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.293 [2024-07-15 18:51:37.977598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.294 [2024-07-15 18:51:37.986975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.294 [2024-07-15 18:51:37.987404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.294 [2024-07-15 18:51:37.987420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.294 [2024-07-15 18:51:37.987426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.294 [2024-07-15 18:51:37.987588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.294 [2024-07-15 18:51:37.987749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.294 [2024-07-15 18:51:37.987756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.294 [2024-07-15 18:51:37.987762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.294 [2024-07-15 18:51:37.990521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.553 [2024-07-15 18:51:37.999986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.553 [2024-07-15 18:51:38.000488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.553 [2024-07-15 18:51:38.000530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.553 [2024-07-15 18:51:38.000551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.553 [2024-07-15 18:51:38.001127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.553 [2024-07-15 18:51:38.001676] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.553 [2024-07-15 18:51:38.001701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.553 [2024-07-15 18:51:38.001711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.553 [2024-07-15 18:51:38.004531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.553 [2024-07-15 18:51:38.012808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.553 [2024-07-15 18:51:38.013261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.554 [2024-07-15 18:51:38.013277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.554 [2024-07-15 18:51:38.013283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.554 [2024-07-15 18:51:38.013445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.554 [2024-07-15 18:51:38.013607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.554 [2024-07-15 18:51:38.013614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.554 [2024-07-15 18:51:38.013619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.554 [2024-07-15 18:51:38.016301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.554 [2024-07-15 18:51:38.025668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.554 [2024-07-15 18:51:38.026037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.554 [2024-07-15 18:51:38.026052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.554 [2024-07-15 18:51:38.026058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.554 [2024-07-15 18:51:38.026246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.554 [2024-07-15 18:51:38.026417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.554 [2024-07-15 18:51:38.026424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.554 [2024-07-15 18:51:38.026430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.554 [2024-07-15 18:51:38.029106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.554 [2024-07-15 18:51:38.038587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.554 [2024-07-15 18:51:38.038930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.554 [2024-07-15 18:51:38.038945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.554 [2024-07-15 18:51:38.038951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.554 [2024-07-15 18:51:38.039113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.554 [2024-07-15 18:51:38.039298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.554 [2024-07-15 18:51:38.039306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.554 [2024-07-15 18:51:38.039312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.554 [2024-07-15 18:51:38.041997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.554 [2024-07-15 18:51:38.051493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.554 [2024-07-15 18:51:38.052007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.554 [2024-07-15 18:51:38.052047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.554 [2024-07-15 18:51:38.052069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.554 [2024-07-15 18:51:38.052600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.554 [2024-07-15 18:51:38.052772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.554 [2024-07-15 18:51:38.052780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.554 [2024-07-15 18:51:38.052786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.554 [2024-07-15 18:51:38.055634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.554 [2024-07-15 18:51:38.064521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.554 [2024-07-15 18:51:38.064878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.554 [2024-07-15 18:51:38.064893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.554 [2024-07-15 18:51:38.064900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.554 [2024-07-15 18:51:38.065071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.554 [2024-07-15 18:51:38.065246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.554 [2024-07-15 18:51:38.065254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.554 [2024-07-15 18:51:38.065260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.554 [2024-07-15 18:51:38.068002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.554 [2024-07-15 18:51:38.077564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.554 [2024-07-15 18:51:38.078037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.554 [2024-07-15 18:51:38.078078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.554 [2024-07-15 18:51:38.078100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.554 [2024-07-15 18:51:38.078693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.554 [2024-07-15 18:51:38.079102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.554 [2024-07-15 18:51:38.079110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.554 [2024-07-15 18:51:38.079116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.554 [2024-07-15 18:51:38.081842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.554 [2024-07-15 18:51:38.090427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.554 [2024-07-15 18:51:38.090894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.554 [2024-07-15 18:51:38.090936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.554 [2024-07-15 18:51:38.090958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.554 [2024-07-15 18:51:38.091553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.554 [2024-07-15 18:51:38.092040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.554 [2024-07-15 18:51:38.092047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.554 [2024-07-15 18:51:38.092053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.554 [2024-07-15 18:51:38.094727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.554 [2024-07-15 18:51:38.103291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.554 [2024-07-15 18:51:38.103756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.554 [2024-07-15 18:51:38.103797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.554 [2024-07-15 18:51:38.103818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.554 [2024-07-15 18:51:38.104202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.554 [2024-07-15 18:51:38.104393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.554 [2024-07-15 18:51:38.104401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.554 [2024-07-15 18:51:38.104407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.554 [2024-07-15 18:51:38.107085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.554 [2024-07-15 18:51:38.116103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.554 [2024-07-15 18:51:38.116590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.554 [2024-07-15 18:51:38.116632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.554 [2024-07-15 18:51:38.116654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.554 [2024-07-15 18:51:38.117041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.554 [2024-07-15 18:51:38.117213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.554 [2024-07-15 18:51:38.117221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.554 [2024-07-15 18:51:38.117232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.554 [2024-07-15 18:51:38.119904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.554 [2024-07-15 18:51:38.129029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.554 [2024-07-15 18:51:38.129440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.554 [2024-07-15 18:51:38.129455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.554 [2024-07-15 18:51:38.129461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.554 [2024-07-15 18:51:38.129633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.554 [2024-07-15 18:51:38.129804] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.554 [2024-07-15 18:51:38.129812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.554 [2024-07-15 18:51:38.129817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.554 [2024-07-15 18:51:38.132516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.554 [2024-07-15 18:51:38.141989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.554 [2024-07-15 18:51:38.142459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.554 [2024-07-15 18:51:38.142475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.554 [2024-07-15 18:51:38.142481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.554 [2024-07-15 18:51:38.142652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.554 [2024-07-15 18:51:38.142826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.554 [2024-07-15 18:51:38.142833] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.554 [2024-07-15 18:51:38.142839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.555 [2024-07-15 18:51:38.145530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.555 [2024-07-15 18:51:38.154905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.555 [2024-07-15 18:51:38.155328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.555 [2024-07-15 18:51:38.155370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.555 [2024-07-15 18:51:38.155391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.555 [2024-07-15 18:51:38.155871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.555 [2024-07-15 18:51:38.156033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.555 [2024-07-15 18:51:38.156040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.555 [2024-07-15 18:51:38.156046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.555 [2024-07-15 18:51:38.158732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.555 [2024-07-15 18:51:38.167745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.555 [2024-07-15 18:51:38.168170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.555 [2024-07-15 18:51:38.168185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.555 [2024-07-15 18:51:38.168192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.555 [2024-07-15 18:51:38.168379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.555 [2024-07-15 18:51:38.168550] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.555 [2024-07-15 18:51:38.168557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.555 [2024-07-15 18:51:38.168563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.555 [2024-07-15 18:51:38.171241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.555 [2024-07-15 18:51:38.180562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.555 [2024-07-15 18:51:38.181014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.555 [2024-07-15 18:51:38.181032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.555 [2024-07-15 18:51:38.181039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.555 [2024-07-15 18:51:38.181210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.555 [2024-07-15 18:51:38.181386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.555 [2024-07-15 18:51:38.181394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.555 [2024-07-15 18:51:38.181400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.555 [2024-07-15 18:51:38.184076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.555 [2024-07-15 18:51:38.193509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.555 [2024-07-15 18:51:38.193998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.555 [2024-07-15 18:51:38.194040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.555 [2024-07-15 18:51:38.194062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.555 [2024-07-15 18:51:38.194491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.555 [2024-07-15 18:51:38.194663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.555 [2024-07-15 18:51:38.194671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.555 [2024-07-15 18:51:38.194677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.555 [2024-07-15 18:51:38.197322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.555 [2024-07-15 18:51:38.206338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.555 [2024-07-15 18:51:38.206767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.555 [2024-07-15 18:51:38.206781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.555 [2024-07-15 18:51:38.206787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.555 [2024-07-15 18:51:38.206950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.555 [2024-07-15 18:51:38.207111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.555 [2024-07-15 18:51:38.207118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.555 [2024-07-15 18:51:38.207124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.555 [2024-07-15 18:51:38.209813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.555 [2024-07-15 18:51:38.219146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.555 [2024-07-15 18:51:38.219639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.555 [2024-07-15 18:51:38.219680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.555 [2024-07-15 18:51:38.219702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.555 [2024-07-15 18:51:38.220067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.555 [2024-07-15 18:51:38.220248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.555 [2024-07-15 18:51:38.220256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.555 [2024-07-15 18:51:38.220262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.555 [2024-07-15 18:51:38.222938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.555 [2024-07-15 18:51:38.232041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.555 [2024-07-15 18:51:38.232479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.555 [2024-07-15 18:51:38.232495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.555 [2024-07-15 18:51:38.232501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.555 [2024-07-15 18:51:38.232673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.555 [2024-07-15 18:51:38.232844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.555 [2024-07-15 18:51:38.232851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.555 [2024-07-15 18:51:38.232858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.555 [2024-07-15 18:51:38.235549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.555 [2024-07-15 18:51:38.244859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.555 [2024-07-15 18:51:38.245313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.555 [2024-07-15 18:51:38.245328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.555 [2024-07-15 18:51:38.245335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.555 [2024-07-15 18:51:38.245497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.555 [2024-07-15 18:51:38.245659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.555 [2024-07-15 18:51:38.245666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.555 [2024-07-15 18:51:38.245671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.555 [2024-07-15 18:51:38.248360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.555 [2024-07-15 18:51:38.257926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.555 [2024-07-15 18:51:38.258388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.555 [2024-07-15 18:51:38.258404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.555 [2024-07-15 18:51:38.258411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.555 [2024-07-15 18:51:38.258588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.555 [2024-07-15 18:51:38.258764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.555 [2024-07-15 18:51:38.258771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.555 [2024-07-15 18:51:38.258778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.814 [2024-07-15 18:51:38.261614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.815 [2024-07-15 18:51:38.270902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.815 [2024-07-15 18:51:38.271383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.815 [2024-07-15 18:51:38.271411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.815 [2024-07-15 18:51:38.271418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.815 [2024-07-15 18:51:38.271580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.815 [2024-07-15 18:51:38.271741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.815 [2024-07-15 18:51:38.271748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.815 [2024-07-15 18:51:38.271754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.815 [2024-07-15 18:51:38.274444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.815 [2024-07-15 18:51:38.283873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.815 [2024-07-15 18:51:38.284339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.815 [2024-07-15 18:51:38.284355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.815 [2024-07-15 18:51:38.284361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.815 [2024-07-15 18:51:38.284535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.815 [2024-07-15 18:51:38.284697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.815 [2024-07-15 18:51:38.284705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.815 [2024-07-15 18:51:38.284711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.815 [2024-07-15 18:51:38.287400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.815 [2024-07-15 18:51:38.296698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.815 [2024-07-15 18:51:38.297131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.815 [2024-07-15 18:51:38.297146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.815 [2024-07-15 18:51:38.297152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.815 [2024-07-15 18:51:38.297339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.815 [2024-07-15 18:51:38.297510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.815 [2024-07-15 18:51:38.297518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.815 [2024-07-15 18:51:38.297524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.815 [2024-07-15 18:51:38.300202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.815 [2024-07-15 18:51:38.309848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.815 [2024-07-15 18:51:38.310232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.815 [2024-07-15 18:51:38.310248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.815 [2024-07-15 18:51:38.310258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.815 [2024-07-15 18:51:38.310435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.815 [2024-07-15 18:51:38.310610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.815 [2024-07-15 18:51:38.310618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.815 [2024-07-15 18:51:38.310624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.815 [2024-07-15 18:51:38.313454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.815 [2024-07-15 18:51:38.322888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.815 [2024-07-15 18:51:38.323356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.815 [2024-07-15 18:51:38.323400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.815 [2024-07-15 18:51:38.323422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.815 [2024-07-15 18:51:38.324001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.815 [2024-07-15 18:51:38.324576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.815 [2024-07-15 18:51:38.324584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.815 [2024-07-15 18:51:38.324591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.815 [2024-07-15 18:51:38.327368] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.815 [2024-07-15 18:51:38.335972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.815 [2024-07-15 18:51:38.336465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.815 [2024-07-15 18:51:38.336507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.815 [2024-07-15 18:51:38.336529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.815 [2024-07-15 18:51:38.337106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.815 [2024-07-15 18:51:38.337575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.815 [2024-07-15 18:51:38.337583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.815 [2024-07-15 18:51:38.337589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.815 [2024-07-15 18:51:38.340266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.815 [2024-07-15 18:51:38.348828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.815 [2024-07-15 18:51:38.349216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.815 [2024-07-15 18:51:38.349235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.815 [2024-07-15 18:51:38.349242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.815 [2024-07-15 18:51:38.349413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.815 [2024-07-15 18:51:38.349584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.815 [2024-07-15 18:51:38.349596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.815 [2024-07-15 18:51:38.349603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.815 [2024-07-15 18:51:38.352304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.815 [2024-07-15 18:51:38.361668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.815 [2024-07-15 18:51:38.362156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.815 [2024-07-15 18:51:38.362198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.815 [2024-07-15 18:51:38.362220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.815 [2024-07-15 18:51:38.362725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.815 [2024-07-15 18:51:38.362896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.815 [2024-07-15 18:51:38.362904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.815 [2024-07-15 18:51:38.362910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.815 [2024-07-15 18:51:38.365597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.815 [2024-07-15 18:51:38.374599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.815 [2024-07-15 18:51:38.375097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.815 [2024-07-15 18:51:38.375139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.815 [2024-07-15 18:51:38.375160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.815 [2024-07-15 18:51:38.375655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.815 [2024-07-15 18:51:38.375827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.815 [2024-07-15 18:51:38.375834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.815 [2024-07-15 18:51:38.375840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.815 [2024-07-15 18:51:38.378518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.815 [2024-07-15 18:51:38.387459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.815 [2024-07-15 18:51:38.387909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.815 [2024-07-15 18:51:38.387924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.815 [2024-07-15 18:51:38.387931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.815 [2024-07-15 18:51:38.388092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.815 [2024-07-15 18:51:38.388275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.815 [2024-07-15 18:51:38.388283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.815 [2024-07-15 18:51:38.388289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.815 [2024-07-15 18:51:38.390999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.815 [2024-07-15 18:51:38.400316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.815 [2024-07-15 18:51:38.400782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.815 [2024-07-15 18:51:38.400822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.815 [2024-07-15 18:51:38.400843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.815 [2024-07-15 18:51:38.401436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.815 [2024-07-15 18:51:38.401681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.815 [2024-07-15 18:51:38.401689] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.815 [2024-07-15 18:51:38.401695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.815 [2024-07-15 18:51:38.404371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.815 [2024-07-15 18:51:38.413222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:21.815 [2024-07-15 18:51:38.413678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.815 [2024-07-15 18:51:38.413692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:21.815 [2024-07-15 18:51:38.413699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:21.815 [2024-07-15 18:51:38.413860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:21.815 [2024-07-15 18:51:38.414022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:21.815 [2024-07-15 18:51:38.414029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:21.815 [2024-07-15 18:51:38.414035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:21.815 [2024-07-15 18:51:38.416722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:21.815 [2024-07-15 18:51:38.426119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.815 [2024-07-15 18:51:38.426597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.815 [2024-07-15 18:51:38.426613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.815 [2024-07-15 18:51:38.426620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.815 [2024-07-15 18:51:38.426791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.815 [2024-07-15 18:51:38.426962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.815 [2024-07-15 18:51:38.426970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.815 [2024-07-15 18:51:38.426976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.815 [2024-07-15 18:51:38.429656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.815 [2024-07-15 18:51:38.439118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.815 [2024-07-15 18:51:38.439570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.815 [2024-07-15 18:51:38.439611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.815 [2024-07-15 18:51:38.439633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.815 [2024-07-15 18:51:38.440217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.815 [2024-07-15 18:51:38.440808] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.815 [2024-07-15 18:51:38.440831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.815 [2024-07-15 18:51:38.440837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.815 [2024-07-15 18:51:38.444894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.815 [2024-07-15 18:51:38.452624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.815 [2024-07-15 18:51:38.453007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.815 [2024-07-15 18:51:38.453022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.815 [2024-07-15 18:51:38.453029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.815 [2024-07-15 18:51:38.453201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.815 [2024-07-15 18:51:38.453377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.815 [2024-07-15 18:51:38.453386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.815 [2024-07-15 18:51:38.453392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.815 [2024-07-15 18:51:38.456120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.815 [2024-07-15 18:51:38.465500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.815 [2024-07-15 18:51:38.465901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.815 [2024-07-15 18:51:38.465916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.815 [2024-07-15 18:51:38.465924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.816 [2024-07-15 18:51:38.466096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.816 [2024-07-15 18:51:38.466272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.816 [2024-07-15 18:51:38.466280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.816 [2024-07-15 18:51:38.466287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.816 [2024-07-15 18:51:38.468965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.816 [2024-07-15 18:51:38.478424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.816 [2024-07-15 18:51:38.478889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.816 [2024-07-15 18:51:38.478931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.816 [2024-07-15 18:51:38.478952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.816 [2024-07-15 18:51:38.479547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.816 [2024-07-15 18:51:38.480026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.816 [2024-07-15 18:51:38.480034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.816 [2024-07-15 18:51:38.480043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.816 [2024-07-15 18:51:38.482733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.816 [2024-07-15 18:51:38.491258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.816 [2024-07-15 18:51:38.491739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.816 [2024-07-15 18:51:38.491780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.816 [2024-07-15 18:51:38.491802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.816 [2024-07-15 18:51:38.492205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.816 [2024-07-15 18:51:38.492397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.816 [2024-07-15 18:51:38.492405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.816 [2024-07-15 18:51:38.492411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.816 [2024-07-15 18:51:38.495088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.816 [2024-07-15 18:51:38.504070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.816 [2024-07-15 18:51:38.504465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.816 [2024-07-15 18:51:38.504481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.816 [2024-07-15 18:51:38.504487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.816 [2024-07-15 18:51:38.504658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.816 [2024-07-15 18:51:38.504834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.816 [2024-07-15 18:51:38.504842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.816 [2024-07-15 18:51:38.504848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.816 [2024-07-15 18:51:38.507548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.816 [2024-07-15 18:51:38.517117] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.816 [2024-07-15 18:51:38.517585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.816 [2024-07-15 18:51:38.517601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:21.816 [2024-07-15 18:51:38.517608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:21.816 [2024-07-15 18:51:38.517784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:21.816 [2024-07-15 18:51:38.517964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.816 [2024-07-15 18:51:38.517971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.816 [2024-07-15 18:51:38.517978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.075 [2024-07-15 18:51:38.520846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.075 [2024-07-15 18:51:38.530023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.075 [2024-07-15 18:51:38.530513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-15 18:51:38.530566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.075 [2024-07-15 18:51:38.530588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.075 [2024-07-15 18:51:38.531128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.075 [2024-07-15 18:51:38.531305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.075 [2024-07-15 18:51:38.531313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.075 [2024-07-15 18:51:38.531319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.075 [2024-07-15 18:51:38.533992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.076 [2024-07-15 18:51:38.542967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.076 [2024-07-15 18:51:38.543455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-15 18:51:38.543471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.076 [2024-07-15 18:51:38.543478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.076 [2024-07-15 18:51:38.543649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.076 [2024-07-15 18:51:38.543820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.076 [2024-07-15 18:51:38.543827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.076 [2024-07-15 18:51:38.543833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.076 [2024-07-15 18:51:38.546525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.076 [2024-07-15 18:51:38.555887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.076 [2024-07-15 18:51:38.556284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-15 18:51:38.556300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.076 [2024-07-15 18:51:38.556307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.076 [2024-07-15 18:51:38.556485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.076 [2024-07-15 18:51:38.556648] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.076 [2024-07-15 18:51:38.556655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.076 [2024-07-15 18:51:38.556661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.076 [2024-07-15 18:51:38.559355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.076 [2024-07-15 18:51:38.568864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.076 [2024-07-15 18:51:38.569270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-15 18:51:38.569287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.076 [2024-07-15 18:51:38.569294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.076 [2024-07-15 18:51:38.569465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.076 [2024-07-15 18:51:38.569640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.076 [2024-07-15 18:51:38.569648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.076 [2024-07-15 18:51:38.569653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.076 [2024-07-15 18:51:38.572481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.076 [2024-07-15 18:51:38.581916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.076 [2024-07-15 18:51:38.582273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-15 18:51:38.582289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.076 [2024-07-15 18:51:38.582296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.076 [2024-07-15 18:51:38.582468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.076 [2024-07-15 18:51:38.582639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.076 [2024-07-15 18:51:38.582647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.076 [2024-07-15 18:51:38.582653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.076 [2024-07-15 18:51:38.585395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.076 [2024-07-15 18:51:38.594980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.076 [2024-07-15 18:51:38.595422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-15 18:51:38.595437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.076 [2024-07-15 18:51:38.595443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.076 [2024-07-15 18:51:38.595605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.076 [2024-07-15 18:51:38.595766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.076 [2024-07-15 18:51:38.595774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.076 [2024-07-15 18:51:38.595779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.076 [2024-07-15 18:51:38.598470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.076 [2024-07-15 18:51:38.607901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.076 [2024-07-15 18:51:38.608304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-15 18:51:38.608320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.076 [2024-07-15 18:51:38.608327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.076 [2024-07-15 18:51:38.608508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.076 [2024-07-15 18:51:38.608673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.076 [2024-07-15 18:51:38.608682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.076 [2024-07-15 18:51:38.608688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.076 [2024-07-15 18:51:38.611378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.076 [2024-07-15 18:51:38.620832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.076 [2024-07-15 18:51:38.621168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-15 18:51:38.621183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.076 [2024-07-15 18:51:38.621190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.076 [2024-07-15 18:51:38.621367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.076 [2024-07-15 18:51:38.621538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.076 [2024-07-15 18:51:38.621546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.076 [2024-07-15 18:51:38.621552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.076 [2024-07-15 18:51:38.624303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.076 [2024-07-15 18:51:38.633816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.076 [2024-07-15 18:51:38.634218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-15 18:51:38.634238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.076 [2024-07-15 18:51:38.634245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.076 [2024-07-15 18:51:38.634417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.076 [2024-07-15 18:51:38.634592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.076 [2024-07-15 18:51:38.634599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.076 [2024-07-15 18:51:38.634605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.076 [2024-07-15 18:51:38.637315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.076 [2024-07-15 18:51:38.646731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.076 [2024-07-15 18:51:38.647197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-15 18:51:38.647212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.076 [2024-07-15 18:51:38.647219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.076 [2024-07-15 18:51:38.647396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.076 [2024-07-15 18:51:38.647568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.076 [2024-07-15 18:51:38.647575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.076 [2024-07-15 18:51:38.647581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.076 [2024-07-15 18:51:38.650260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.076 [2024-07-15 18:51:38.659585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.076 [2024-07-15 18:51:38.660041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-15 18:51:38.660055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.076 [2024-07-15 18:51:38.660064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.076 [2024-07-15 18:51:38.660233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.076 [2024-07-15 18:51:38.660419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.076 [2024-07-15 18:51:38.660426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.076 [2024-07-15 18:51:38.660432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.076 [2024-07-15 18:51:38.663110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.076 [2024-07-15 18:51:38.672448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.076 [2024-07-15 18:51:38.672843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-15 18:51:38.672858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.076 [2024-07-15 18:51:38.672865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.076 [2024-07-15 18:51:38.673036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.076 [2024-07-15 18:51:38.673211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.076 [2024-07-15 18:51:38.673219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.076 [2024-07-15 18:51:38.673231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.076 [2024-07-15 18:51:38.675909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.077 [2024-07-15 18:51:38.685260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.077 [2024-07-15 18:51:38.685712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-15 18:51:38.685759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.077 [2024-07-15 18:51:38.685780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.077 [2024-07-15 18:51:38.686376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.077 [2024-07-15 18:51:38.686576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.077 [2024-07-15 18:51:38.686584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.077 [2024-07-15 18:51:38.686590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.077 [2024-07-15 18:51:38.689269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.077 [2024-07-15 18:51:38.698074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.077 [2024-07-15 18:51:38.698474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-15 18:51:38.698490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.077 [2024-07-15 18:51:38.698496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.077 [2024-07-15 18:51:38.698667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.077 [2024-07-15 18:51:38.698838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.077 [2024-07-15 18:51:38.698848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.077 [2024-07-15 18:51:38.698854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.077 [2024-07-15 18:51:38.701545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.077 [2024-07-15 18:51:38.710908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.077 [2024-07-15 18:51:38.711418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.077 [2024-07-15 18:51:38.711462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:22.077 [2024-07-15 18:51:38.711485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:22.077 [2024-07-15 18:51:38.712062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:22.077 [2024-07-15 18:51:38.712610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.077 [2024-07-15 18:51:38.712618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.077 [2024-07-15 18:51:38.712624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.077 [2024-07-15 18:51:38.715304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.077 [2024-07-15 18:51:38.723854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.077 [2024-07-15 18:51:38.724314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.077 [2024-07-15 18:51:38.724330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:22.077 [2024-07-15 18:51:38.724336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:22.077 [2024-07-15 18:51:38.724512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:22.077 [2024-07-15 18:51:38.724677] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.077 [2024-07-15 18:51:38.724684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.077 [2024-07-15 18:51:38.724690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.077 [2024-07-15 18:51:38.727417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.077 [2024-07-15 18:51:38.736666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.077 [2024-07-15 18:51:38.737095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.077 [2024-07-15 18:51:38.737110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:22.077 [2024-07-15 18:51:38.737116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:22.077 [2024-07-15 18:51:38.737302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:22.077 [2024-07-15 18:51:38.737473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.077 [2024-07-15 18:51:38.737482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.077 [2024-07-15 18:51:38.737487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.077 [2024-07-15 18:51:38.740163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.077 [2024-07-15 18:51:38.749510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.077 [2024-07-15 18:51:38.749989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.077 [2024-07-15 18:51:38.750030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:22.077 [2024-07-15 18:51:38.750051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:22.077 [2024-07-15 18:51:38.750558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:22.077 [2024-07-15 18:51:38.750729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.077 [2024-07-15 18:51:38.750737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.077 [2024-07-15 18:51:38.750743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.077 [2024-07-15 18:51:38.753417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.077 [2024-07-15 18:51:38.762421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.077 [2024-07-15 18:51:38.762901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.077 [2024-07-15 18:51:38.762943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:22.077 [2024-07-15 18:51:38.762965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:22.077 [2024-07-15 18:51:38.763486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:22.077 [2024-07-15 18:51:38.763657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.077 [2024-07-15 18:51:38.763665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.077 [2024-07-15 18:51:38.763672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.077 [2024-07-15 18:51:38.766494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.077 [2024-07-15 18:51:38.775215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.077 [2024-07-15 18:51:38.775674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.077 [2024-07-15 18:51:38.775690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:22.077 [2024-07-15 18:51:38.775696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:22.077 [2024-07-15 18:51:38.775859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:22.077 [2024-07-15 18:51:38.776020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.077 [2024-07-15 18:51:38.776028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.077 [2024-07-15 18:51:38.776033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.077 [2024-07-15 18:51:38.778878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.337 [2024-07-15 18:51:38.788281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.337 [2024-07-15 18:51:38.788742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.337 [2024-07-15 18:51:38.788758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:22.337 [2024-07-15 18:51:38.788768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:22.337 [2024-07-15 18:51:38.788940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:22.337 [2024-07-15 18:51:38.789111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.338 [2024-07-15 18:51:38.789119] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.338 [2024-07-15 18:51:38.789125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.338 [2024-07-15 18:51:38.791824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.338 [2024-07-15 18:51:38.801152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.338 [2024-07-15 18:51:38.801610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.338 [2024-07-15 18:51:38.801625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:22.338 [2024-07-15 18:51:38.801632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:22.338 [2024-07-15 18:51:38.801802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:22.338 [2024-07-15 18:51:38.801972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.338 [2024-07-15 18:51:38.801980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.338 [2024-07-15 18:51:38.801986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.338 [2024-07-15 18:51:38.804666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.338 [2024-07-15 18:51:38.814041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.338 [2024-07-15 18:51:38.814498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.338 [2024-07-15 18:51:38.814514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:22.338 [2024-07-15 18:51:38.814520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:22.338 [2024-07-15 18:51:38.814691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:22.338 [2024-07-15 18:51:38.814862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.338 [2024-07-15 18:51:38.814869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.338 [2024-07-15 18:51:38.814875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.338 [2024-07-15 18:51:38.817568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.338 [2024-07-15 18:51:38.826941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.338 [2024-07-15 18:51:38.827446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.338 [2024-07-15 18:51:38.827488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:22.338 [2024-07-15 18:51:38.827510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:22.338 [2024-07-15 18:51:38.828047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:22.338 [2024-07-15 18:51:38.828229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.338 [2024-07-15 18:51:38.828241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.338 [2024-07-15 18:51:38.828247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.338 [2024-07-15 18:51:38.831072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.338 [2024-07-15 18:51:38.840060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.338 [2024-07-15 18:51:38.840449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.338 [2024-07-15 18:51:38.840491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:22.338 [2024-07-15 18:51:38.840513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:22.338 [2024-07-15 18:51:38.841091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:22.338 [2024-07-15 18:51:38.841687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.338 [2024-07-15 18:51:38.841712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.338 [2024-07-15 18:51:38.841731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.338 [2024-07-15 18:51:38.844597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.338 [2024-07-15 18:51:38.852929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.338 [2024-07-15 18:51:38.853389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.338 [2024-07-15 18:51:38.853404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:22.338 [2024-07-15 18:51:38.853411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:22.338 [2024-07-15 18:51:38.853582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:22.338 [2024-07-15 18:51:38.853753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.338 [2024-07-15 18:51:38.853761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.338 [2024-07-15 18:51:38.853766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.338 [2024-07-15 18:51:38.856509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.338 [2024-07-15 18:51:38.865722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.338 [2024-07-15 18:51:38.866205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.338 [2024-07-15 18:51:38.866257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:22.338 [2024-07-15 18:51:38.866280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:22.338 [2024-07-15 18:51:38.866852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:22.338 [2024-07-15 18:51:38.867024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.338 [2024-07-15 18:51:38.867032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.338 [2024-07-15 18:51:38.867038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.338 [2024-07-15 18:51:38.869664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.338 [2024-07-15 18:51:38.878533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.338 [2024-07-15 18:51:38.878969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.338 [2024-07-15 18:51:38.879010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:22.338 [2024-07-15 18:51:38.879032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:22.338 [2024-07-15 18:51:38.879602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:22.338 [2024-07-15 18:51:38.879765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.338 [2024-07-15 18:51:38.879773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.338 [2024-07-15 18:51:38.879778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.338 [2024-07-15 18:51:38.882423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.338 [2024-07-15 18:51:38.891386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.338 [2024-07-15 18:51:38.891811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.338 [2024-07-15 18:51:38.891826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:22.338 [2024-07-15 18:51:38.891832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:22.338 [2024-07-15 18:51:38.891995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:22.338 [2024-07-15 18:51:38.892175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.338 [2024-07-15 18:51:38.892183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.338 [2024-07-15 18:51:38.892189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.338 [2024-07-15 18:51:38.894874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.338 [2024-07-15 18:51:38.904215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.338 [2024-07-15 18:51:38.904671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.338 [2024-07-15 18:51:38.904686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:22.338 [2024-07-15 18:51:38.904692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:22.338 [2024-07-15 18:51:38.904864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:22.338 [2024-07-15 18:51:38.905035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.338 [2024-07-15 18:51:38.905043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.338 [2024-07-15 18:51:38.905049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.338 [2024-07-15 18:51:38.907731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.338 [2024-07-15 18:51:38.917122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.338 [2024-07-15 18:51:38.917572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.338 [2024-07-15 18:51:38.917588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:22.338 [2024-07-15 18:51:38.917595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:22.338 [2024-07-15 18:51:38.917769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:22.338 [2024-07-15 18:51:38.917940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.338 [2024-07-15 18:51:38.917947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.338 [2024-07-15 18:51:38.917954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.338 [2024-07-15 18:51:38.920637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.338 [2024-07-15 18:51:38.930102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.338 [2024-07-15 18:51:38.930562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.338 [2024-07-15 18:51:38.930577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:22.338 [2024-07-15 18:51:38.930584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:22.338 [2024-07-15 18:51:38.930755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:22.338 [2024-07-15 18:51:38.930926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.339 [2024-07-15 18:51:38.930933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.339 [2024-07-15 18:51:38.930939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.339 [2024-07-15 18:51:38.933629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.339 [2024-07-15 18:51:38.942995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.339 [2024-07-15 18:51:38.943483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.339 [2024-07-15 18:51:38.943498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:22.339 [2024-07-15 18:51:38.943505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:22.339 [2024-07-15 18:51:38.943669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:22.339 [2024-07-15 18:51:38.943831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.339 [2024-07-15 18:51:38.943839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.339 [2024-07-15 18:51:38.943847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.339 [2024-07-15 18:51:38.946520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.339 [2024-07-15 18:51:38.955921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.339 [2024-07-15 18:51:38.956371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.339 [2024-07-15 18:51:38.956387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:22.339 [2024-07-15 18:51:38.956393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:22.339 [2024-07-15 18:51:38.956555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:22.339 [2024-07-15 18:51:38.956717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.339 [2024-07-15 18:51:38.956725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.339 [2024-07-15 18:51:38.956734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.339 [2024-07-15 18:51:38.959429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.339 [2024-07-15 18:51:38.968903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.339 [2024-07-15 18:51:38.969394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.339 [2024-07-15 18:51:38.969439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:22.339 [2024-07-15 18:51:38.969460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:22.339 [2024-07-15 18:51:38.970037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:22.339 [2024-07-15 18:51:38.970291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.339 [2024-07-15 18:51:38.970301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.339 [2024-07-15 18:51:38.970309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.339 [2024-07-15 18:51:38.973043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.339 [2024-07-15 18:51:38.981729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.339 [2024-07-15 18:51:38.982118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.339 [2024-07-15 18:51:38.982133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:22.339 [2024-07-15 18:51:38.982139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:22.339 [2024-07-15 18:51:38.982315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:22.339 [2024-07-15 18:51:38.982487] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.339 [2024-07-15 18:51:38.982495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.339 [2024-07-15 18:51:38.982501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.339 [2024-07-15 18:51:38.985175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.339 [2024-07-15 18:51:38.994699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.339 [2024-07-15 18:51:38.995126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.339 [2024-07-15 18:51:38.995141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:22.339 [2024-07-15 18:51:38.995148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:22.339 [2024-07-15 18:51:38.995326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:22.339 [2024-07-15 18:51:38.995498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.339 [2024-07-15 18:51:38.995505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.339 [2024-07-15 18:51:38.995511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.339 [2024-07-15 18:51:38.998280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.339 [2024-07-15 18:51:39.007839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.339 [2024-07-15 18:51:39.008286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.339 [2024-07-15 18:51:39.008341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:22.339 [2024-07-15 18:51:39.008362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:22.339 [2024-07-15 18:51:39.008939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:22.339 [2024-07-15 18:51:39.009330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.339 [2024-07-15 18:51:39.009339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.339 [2024-07-15 18:51:39.009345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.339 [2024-07-15 18:51:39.012089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.339 [2024-07-15 18:51:39.020777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.339 [2024-07-15 18:51:39.021282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.339 [2024-07-15 18:51:39.021326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:22.339 [2024-07-15 18:51:39.021347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:22.339 [2024-07-15 18:51:39.021925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:22.339 [2024-07-15 18:51:39.022445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.339 [2024-07-15 18:51:39.022453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.339 [2024-07-15 18:51:39.022459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.339 [2024-07-15 18:51:39.025137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.339 [2024-07-15 18:51:39.033754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.339 [2024-07-15 18:51:39.034192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.339 [2024-07-15 18:51:39.034260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:22.339 [2024-07-15 18:51:39.034282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:22.339 [2024-07-15 18:51:39.034862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:22.339 [2024-07-15 18:51:39.035367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.339 [2024-07-15 18:51:39.035376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.339 [2024-07-15 18:51:39.035382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.339 [2024-07-15 18:51:39.038113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.600 [2024-07-15 18:51:39.046803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.600 [2024-07-15 18:51:39.047256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.600 [2024-07-15 18:51:39.047272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:22.600 [2024-07-15 18:51:39.047279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:22.600 [2024-07-15 18:51:39.047470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:22.600 [2024-07-15 18:51:39.047651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.600 [2024-07-15 18:51:39.047660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.600 [2024-07-15 18:51:39.047667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.600 [2024-07-15 18:51:39.050528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.600 [2024-07-15 18:51:39.059704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:22.600 [2024-07-15 18:51:39.060109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.600 [2024-07-15 18:51:39.060125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420
00:26:22.600 [2024-07-15 18:51:39.060132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set
00:26:22.600 [2024-07-15 18:51:39.060309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor
00:26:22.600 [2024-07-15 18:51:39.060482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:22.600 [2024-07-15 18:51:39.060490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:22.600 [2024-07-15 18:51:39.060496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:22.600 [2024-07-15 18:51:39.063250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:22.600 [2024-07-15 18:51:39.072607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.600 [2024-07-15 18:51:39.073082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-07-15 18:51:39.073125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.600 [2024-07-15 18:51:39.073146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.600 [2024-07-15 18:51:39.073738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.600 [2024-07-15 18:51:39.074338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.600 [2024-07-15 18:51:39.074346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.600 [2024-07-15 18:51:39.074352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.600 [2024-07-15 18:51:39.077090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.600 [2024-07-15 18:51:39.085757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.600 [2024-07-15 18:51:39.086244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.600 [2024-07-15 18:51:39.086287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.600 [2024-07-15 18:51:39.086309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.600 [2024-07-15 18:51:39.086887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.600 [2024-07-15 18:51:39.087431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.601 [2024-07-15 18:51:39.087440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.601 [2024-07-15 18:51:39.087446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.601 [2024-07-15 18:51:39.090215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.601 [2024-07-15 18:51:39.098844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.601 [2024-07-15 18:51:39.099267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-07-15 18:51:39.099310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.601 [2024-07-15 18:51:39.099331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.601 [2024-07-15 18:51:39.099909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.601 [2024-07-15 18:51:39.100440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.601 [2024-07-15 18:51:39.100449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.601 [2024-07-15 18:51:39.100455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.601 [2024-07-15 18:51:39.103197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.601 [2024-07-15 18:51:39.111814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.601 [2024-07-15 18:51:39.112273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-07-15 18:51:39.112288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.601 [2024-07-15 18:51:39.112295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.601 [2024-07-15 18:51:39.112466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.601 [2024-07-15 18:51:39.112638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.601 [2024-07-15 18:51:39.112645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.601 [2024-07-15 18:51:39.112651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.601 [2024-07-15 18:51:39.115331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.601 [2024-07-15 18:51:39.124797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.601 [2024-07-15 18:51:39.125237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-07-15 18:51:39.125252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.601 [2024-07-15 18:51:39.125259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.601 [2024-07-15 18:51:39.125431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.601 [2024-07-15 18:51:39.125606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.601 [2024-07-15 18:51:39.125614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.601 [2024-07-15 18:51:39.125620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.601 [2024-07-15 18:51:39.128378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.601 [2024-07-15 18:51:39.137804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.601 [2024-07-15 18:51:39.138284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-07-15 18:51:39.138327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.601 [2024-07-15 18:51:39.138356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.601 [2024-07-15 18:51:39.138710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.601 [2024-07-15 18:51:39.138882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.601 [2024-07-15 18:51:39.138890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.601 [2024-07-15 18:51:39.138896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.601 [2024-07-15 18:51:39.141646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.601 [2024-07-15 18:51:39.150727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.601 [2024-07-15 18:51:39.151206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-07-15 18:51:39.151258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.601 [2024-07-15 18:51:39.151280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.601 [2024-07-15 18:51:39.151665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.601 [2024-07-15 18:51:39.151837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.601 [2024-07-15 18:51:39.151845] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.601 [2024-07-15 18:51:39.151851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.601 [2024-07-15 18:51:39.154548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.601 [2024-07-15 18:51:39.163566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.601 [2024-07-15 18:51:39.164065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-07-15 18:51:39.164105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.601 [2024-07-15 18:51:39.164126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.601 [2024-07-15 18:51:39.164660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.601 [2024-07-15 18:51:39.164833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.601 [2024-07-15 18:51:39.164841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.601 [2024-07-15 18:51:39.164847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.601 [2024-07-15 18:51:39.167528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.601 [2024-07-15 18:51:39.176509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.601 [2024-07-15 18:51:39.176893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-07-15 18:51:39.176909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.601 [2024-07-15 18:51:39.176916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.601 [2024-07-15 18:51:39.177087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.601 [2024-07-15 18:51:39.177264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.601 [2024-07-15 18:51:39.177276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.601 [2024-07-15 18:51:39.177282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.601 [2024-07-15 18:51:39.179960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.601 [2024-07-15 18:51:39.189436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.601 [2024-07-15 18:51:39.189877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-07-15 18:51:39.189919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.601 [2024-07-15 18:51:39.189940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.601 [2024-07-15 18:51:39.190451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.601 [2024-07-15 18:51:39.190623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.601 [2024-07-15 18:51:39.190631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.601 [2024-07-15 18:51:39.190637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.601 [2024-07-15 18:51:39.193400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.601 [2024-07-15 18:51:39.202442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.601 [2024-07-15 18:51:39.202859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-07-15 18:51:39.202874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.601 [2024-07-15 18:51:39.202881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.601 [2024-07-15 18:51:39.203051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.601 [2024-07-15 18:51:39.203223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.601 [2024-07-15 18:51:39.203235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.601 [2024-07-15 18:51:39.203241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.601 [2024-07-15 18:51:39.205973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.601 [2024-07-15 18:51:39.215235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.601 [2024-07-15 18:51:39.215705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-07-15 18:51:39.215747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.601 [2024-07-15 18:51:39.215768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.601 [2024-07-15 18:51:39.216358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.601 [2024-07-15 18:51:39.216587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.601 [2024-07-15 18:51:39.216594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.601 [2024-07-15 18:51:39.216601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.601 [2024-07-15 18:51:39.219293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.601 [2024-07-15 18:51:39.228109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.601 [2024-07-15 18:51:39.228510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.601 [2024-07-15 18:51:39.228552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.601 [2024-07-15 18:51:39.228574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.601 [2024-07-15 18:51:39.229151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.602 [2024-07-15 18:51:39.229741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.602 [2024-07-15 18:51:39.229767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.602 [2024-07-15 18:51:39.229787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.602 [2024-07-15 18:51:39.233876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.602 [2024-07-15 18:51:39.241666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.602 [2024-07-15 18:51:39.242120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-07-15 18:51:39.242135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.602 [2024-07-15 18:51:39.242142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.602 [2024-07-15 18:51:39.242318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.602 [2024-07-15 18:51:39.242490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.602 [2024-07-15 18:51:39.242498] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.602 [2024-07-15 18:51:39.242504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.602 [2024-07-15 18:51:39.245245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.602 [2024-07-15 18:51:39.254641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.602 [2024-07-15 18:51:39.255041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-07-15 18:51:39.255056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.602 [2024-07-15 18:51:39.255063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.602 [2024-07-15 18:51:39.255240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.602 [2024-07-15 18:51:39.255411] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.602 [2024-07-15 18:51:39.255419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.602 [2024-07-15 18:51:39.255425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.602 [2024-07-15 18:51:39.258168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.602 [2024-07-15 18:51:39.267540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.602 [2024-07-15 18:51:39.268062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-07-15 18:51:39.268102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.602 [2024-07-15 18:51:39.268123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.602 [2024-07-15 18:51:39.268718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.602 [2024-07-15 18:51:39.269032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.602 [2024-07-15 18:51:39.269040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.602 [2024-07-15 18:51:39.269046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.602 [2024-07-15 18:51:39.271732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.602 [2024-07-15 18:51:39.280482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.602 [2024-07-15 18:51:39.280866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-07-15 18:51:39.280882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.602 [2024-07-15 18:51:39.280889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.602 [2024-07-15 18:51:39.281061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.602 [2024-07-15 18:51:39.281236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.602 [2024-07-15 18:51:39.281244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.602 [2024-07-15 18:51:39.281250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.602 [2024-07-15 18:51:39.283972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.602 [2024-07-15 18:51:39.293485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.602 [2024-07-15 18:51:39.293864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.602 [2024-07-15 18:51:39.293880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.602 [2024-07-15 18:51:39.293887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.602 [2024-07-15 18:51:39.294059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.602 [2024-07-15 18:51:39.294234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.602 [2024-07-15 18:51:39.294243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.602 [2024-07-15 18:51:39.294249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.602 [2024-07-15 18:51:39.296980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.905 [2024-07-15 18:51:39.306684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.905 [2024-07-15 18:51:39.307197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.905 [2024-07-15 18:51:39.307249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.905 [2024-07-15 18:51:39.307272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.905 [2024-07-15 18:51:39.307759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.905 [2024-07-15 18:51:39.307931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.905 [2024-07-15 18:51:39.307938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.905 [2024-07-15 18:51:39.307947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.905 [2024-07-15 18:51:39.310678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.905 [2024-07-15 18:51:39.319683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.905 [2024-07-15 18:51:39.320149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.905 [2024-07-15 18:51:39.320191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.905 [2024-07-15 18:51:39.320213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.905 [2024-07-15 18:51:39.320811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.905 [2024-07-15 18:51:39.320983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.905 [2024-07-15 18:51:39.320990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.905 [2024-07-15 18:51:39.320997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.905 [2024-07-15 18:51:39.324877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.905 [2024-07-15 18:51:39.333296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.905 [2024-07-15 18:51:39.333718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.905 [2024-07-15 18:51:39.333733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.905 [2024-07-15 18:51:39.333741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.905 [2024-07-15 18:51:39.333917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.905 [2024-07-15 18:51:39.334093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.905 [2024-07-15 18:51:39.334100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.905 [2024-07-15 18:51:39.334107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.905 [2024-07-15 18:51:39.336938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.905 [2024-07-15 18:51:39.346395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.905 [2024-07-15 18:51:39.346812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.905 [2024-07-15 18:51:39.346863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.905 [2024-07-15 18:51:39.346884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.905 [2024-07-15 18:51:39.347478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.905 [2024-07-15 18:51:39.348039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.905 [2024-07-15 18:51:39.348047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.905 [2024-07-15 18:51:39.348053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.905 [2024-07-15 18:51:39.350795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.905 [2024-07-15 18:51:39.359381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.905 [2024-07-15 18:51:39.359769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.905 [2024-07-15 18:51:39.359811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.905 [2024-07-15 18:51:39.359833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.905 [2024-07-15 18:51:39.360436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.905 [2024-07-15 18:51:39.360987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.905 [2024-07-15 18:51:39.360995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.905 [2024-07-15 18:51:39.361001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.905 [2024-07-15 18:51:39.363742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.905 [2024-07-15 18:51:39.372282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.905 [2024-07-15 18:51:39.372669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.905 [2024-07-15 18:51:39.372684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.905 [2024-07-15 18:51:39.372691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.905 [2024-07-15 18:51:39.372861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.905 [2024-07-15 18:51:39.373032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.905 [2024-07-15 18:51:39.373039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.905 [2024-07-15 18:51:39.373045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.905 [2024-07-15 18:51:39.375784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.905 [2024-07-15 18:51:39.385240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.905 [2024-07-15 18:51:39.385722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.905 [2024-07-15 18:51:39.385763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.905 [2024-07-15 18:51:39.385784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.905 [2024-07-15 18:51:39.386188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.905 [2024-07-15 18:51:39.386363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.905 [2024-07-15 18:51:39.386372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.905 [2024-07-15 18:51:39.386378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.905 [2024-07-15 18:51:39.389047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.905 [2024-07-15 18:51:39.398056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.905 [2024-07-15 18:51:39.398467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.905 [2024-07-15 18:51:39.398482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.905 [2024-07-15 18:51:39.398489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.905 [2024-07-15 18:51:39.398664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.905 [2024-07-15 18:51:39.398834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.905 [2024-07-15 18:51:39.398842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.905 [2024-07-15 18:51:39.398848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.905 [2024-07-15 18:51:39.401526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.905 [2024-07-15 18:51:39.410949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.905 [2024-07-15 18:51:39.411367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.905 [2024-07-15 18:51:39.411394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.906 [2024-07-15 18:51:39.411401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.906 [2024-07-15 18:51:39.411563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.906 [2024-07-15 18:51:39.411724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.906 [2024-07-15 18:51:39.411731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.906 [2024-07-15 18:51:39.411737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.906 [2024-07-15 18:51:39.414421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.906 [2024-07-15 18:51:39.423796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.906 [2024-07-15 18:51:39.424155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.906 [2024-07-15 18:51:39.424169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.906 [2024-07-15 18:51:39.424176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.906 [2024-07-15 18:51:39.424365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.906 [2024-07-15 18:51:39.424536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.906 [2024-07-15 18:51:39.424544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.906 [2024-07-15 18:51:39.424550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.906 [2024-07-15 18:51:39.427221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.906 [2024-07-15 18:51:39.436808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.906 [2024-07-15 18:51:39.437272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.906 [2024-07-15 18:51:39.437314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.906 [2024-07-15 18:51:39.437335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.906 [2024-07-15 18:51:39.437914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.906 [2024-07-15 18:51:39.438360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.906 [2024-07-15 18:51:39.438368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.906 [2024-07-15 18:51:39.438377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.906 [2024-07-15 18:51:39.441052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.906 [2024-07-15 18:51:39.449709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.906 [2024-07-15 18:51:39.450186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.906 [2024-07-15 18:51:39.450240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.906 [2024-07-15 18:51:39.450264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.906 [2024-07-15 18:51:39.450841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.906 [2024-07-15 18:51:39.451413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.906 [2024-07-15 18:51:39.451421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.906 [2024-07-15 18:51:39.451427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.906 [2024-07-15 18:51:39.454100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.906 [2024-07-15 18:51:39.462513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.906 [2024-07-15 18:51:39.462964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.906 [2024-07-15 18:51:39.462980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.906 [2024-07-15 18:51:39.462986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.906 [2024-07-15 18:51:39.463157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.906 [2024-07-15 18:51:39.463335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.906 [2024-07-15 18:51:39.463343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.906 [2024-07-15 18:51:39.463348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.906 [2024-07-15 18:51:39.466020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.906 [2024-07-15 18:51:39.475364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.906 [2024-07-15 18:51:39.475852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.906 [2024-07-15 18:51:39.475895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.906 [2024-07-15 18:51:39.475918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.906 [2024-07-15 18:51:39.476471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.906 [2024-07-15 18:51:39.476644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.906 [2024-07-15 18:51:39.476652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.906 [2024-07-15 18:51:39.476658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.906 [2024-07-15 18:51:39.479332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.906 [2024-07-15 18:51:39.488248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.906 [2024-07-15 18:51:39.488736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.906 [2024-07-15 18:51:39.488785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.906 [2024-07-15 18:51:39.488806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.906 [2024-07-15 18:51:39.489271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.906 [2024-07-15 18:51:39.489444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.906 [2024-07-15 18:51:39.489452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.906 [2024-07-15 18:51:39.489458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.906 [2024-07-15 18:51:39.492128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.906 [2024-07-15 18:51:39.501034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.906 [2024-07-15 18:51:39.501488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.906 [2024-07-15 18:51:39.501503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.906 [2024-07-15 18:51:39.501510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.906 [2024-07-15 18:51:39.501681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.906 [2024-07-15 18:51:39.501851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.906 [2024-07-15 18:51:39.501859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.906 [2024-07-15 18:51:39.501865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.906 [2024-07-15 18:51:39.504553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.906 [2024-07-15 18:51:39.513998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1242005 Killed "${NVMF_APP[@]}" "$@" 00:26:22.906 [2024-07-15 18:51:39.514503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.906 [2024-07-15 18:51:39.514519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.906 [2024-07-15 18:51:39.514526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.906 [2024-07-15 18:51:39.514698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.906 [2024-07-15 18:51:39.514869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.906 [2024-07-15 18:51:39.514876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.906 [2024-07-15 18:51:39.514882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.906 18:51:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:22.906 18:51:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:22.906 18:51:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:22.906 18:51:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:22.906 18:51:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:22.906 [2024-07-15 18:51:39.517718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.906 18:51:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1243410 00:26:22.906 18:51:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1243410 00:26:22.906 18:51:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:22.906 18:51:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1243410 ']' 00:26:22.906 18:51:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.906 18:51:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:22.906 18:51:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:22.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:22.906 18:51:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:22.906 18:51:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:22.906 [2024-07-15 18:51:39.527075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.906 [2024-07-15 18:51:39.527541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.906 [2024-07-15 18:51:39.527557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.906 [2024-07-15 18:51:39.527564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.906 [2024-07-15 18:51:39.527740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.906 [2024-07-15 18:51:39.527916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.906 [2024-07-15 18:51:39.527924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.907 [2024-07-15 18:51:39.527930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.907 [2024-07-15 18:51:39.530758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.907 [2024-07-15 18:51:39.540275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.907 [2024-07-15 18:51:39.540668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.907 [2024-07-15 18:51:39.540684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.907 [2024-07-15 18:51:39.540691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.907 [2024-07-15 18:51:39.540867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.907 [2024-07-15 18:51:39.541044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.907 [2024-07-15 18:51:39.541052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.907 [2024-07-15 18:51:39.541058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.907 [2024-07-15 18:51:39.543884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.907 [2024-07-15 18:51:39.553407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.907 [2024-07-15 18:51:39.553727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.907 [2024-07-15 18:51:39.553743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.907 [2024-07-15 18:51:39.553749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.907 [2024-07-15 18:51:39.553926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.907 [2024-07-15 18:51:39.554103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.907 [2024-07-15 18:51:39.554114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.907 [2024-07-15 18:51:39.554121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.907 [2024-07-15 18:51:39.556950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.907 [2024-07-15 18:51:39.566467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.907 [2024-07-15 18:51:39.566984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.907 [2024-07-15 18:51:39.567001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.907 [2024-07-15 18:51:39.567008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.907 [2024-07-15 18:51:39.567184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.907 [2024-07-15 18:51:39.567365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.907 [2024-07-15 18:51:39.567373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.907 [2024-07-15 18:51:39.567380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.907 [2024-07-15 18:51:39.570235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.907 [2024-07-15 18:51:39.573390] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:26:22.907 [2024-07-15 18:51:39.573427] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:22.907 [2024-07-15 18:51:39.579609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.907 [2024-07-15 18:51:39.580009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.907 [2024-07-15 18:51:39.580025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.907 [2024-07-15 18:51:39.580033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.907 [2024-07-15 18:51:39.580209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.907 [2024-07-15 18:51:39.580392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.907 [2024-07-15 18:51:39.580401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.907 [2024-07-15 18:51:39.580407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.907 [2024-07-15 18:51:39.583237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.907 [2024-07-15 18:51:39.592684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.907 [2024-07-15 18:51:39.593206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.907 [2024-07-15 18:51:39.593222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.907 [2024-07-15 18:51:39.593234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.907 [2024-07-15 18:51:39.593411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.907 [2024-07-15 18:51:39.593587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.907 [2024-07-15 18:51:39.593598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.907 [2024-07-15 18:51:39.593605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.907 [2024-07-15 18:51:39.596433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.907 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.907 [2024-07-15 18:51:39.605787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.907 [2024-07-15 18:51:39.606235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.907 [2024-07-15 18:51:39.606251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:22.907 [2024-07-15 18:51:39.606259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:22.907 [2024-07-15 18:51:39.606435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:22.907 [2024-07-15 18:51:39.606612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.907 [2024-07-15 18:51:39.606620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.907 [2024-07-15 18:51:39.606627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.213 [2024-07-15 18:51:39.609455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.213 [2024-07-15 18:51:39.618967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.213 [2024-07-15 18:51:39.619367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.213 [2024-07-15 18:51:39.619383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.213 [2024-07-15 18:51:39.619391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.213 [2024-07-15 18:51:39.619568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.213 [2024-07-15 18:51:39.619745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.213 [2024-07-15 18:51:39.619753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.213 [2024-07-15 18:51:39.619759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.213 [2024-07-15 18:51:39.622593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.213 [2024-07-15 18:51:39.631880] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:23.213 [2024-07-15 18:51:39.632125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.213 [2024-07-15 18:51:39.632454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.213 [2024-07-15 18:51:39.632470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.213 [2024-07-15 18:51:39.632477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.213 [2024-07-15 18:51:39.632654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.213 [2024-07-15 18:51:39.632833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.213 [2024-07-15 18:51:39.632841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.213 [2024-07-15 18:51:39.632848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.213 [2024-07-15 18:51:39.635662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.213 [2024-07-15 18:51:39.645170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.213 [2024-07-15 18:51:39.645632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.213 [2024-07-15 18:51:39.645648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.213 [2024-07-15 18:51:39.645656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.213 [2024-07-15 18:51:39.645832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.213 [2024-07-15 18:51:39.646009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.213 [2024-07-15 18:51:39.646016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.213 [2024-07-15 18:51:39.646023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.213 [2024-07-15 18:51:39.648814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.213 [2024-07-15 18:51:39.658229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.213 [2024-07-15 18:51:39.658703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.213 [2024-07-15 18:51:39.658719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.213 [2024-07-15 18:51:39.658726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.214 [2024-07-15 18:51:39.658902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.214 [2024-07-15 18:51:39.659078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.214 [2024-07-15 18:51:39.659086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.214 [2024-07-15 18:51:39.659092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.214 [2024-07-15 18:51:39.661949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.214 [2024-07-15 18:51:39.671242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.214 [2024-07-15 18:51:39.671734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.214 [2024-07-15 18:51:39.671750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.214 [2024-07-15 18:51:39.671758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.214 [2024-07-15 18:51:39.671936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.214 [2024-07-15 18:51:39.672113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.214 [2024-07-15 18:51:39.672121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.214 [2024-07-15 18:51:39.672127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.214 [2024-07-15 18:51:39.674956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.214 [2024-07-15 18:51:39.684264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.214 [2024-07-15 18:51:39.684746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.214 [2024-07-15 18:51:39.684763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.214 [2024-07-15 18:51:39.684778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.214 [2024-07-15 18:51:39.684956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.214 [2024-07-15 18:51:39.685133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.214 [2024-07-15 18:51:39.685140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.214 [2024-07-15 18:51:39.685147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.214 [2024-07-15 18:51:39.687947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.214 [2024-07-15 18:51:39.697442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.214 [2024-07-15 18:51:39.697904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.214 [2024-07-15 18:51:39.697920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.214 [2024-07-15 18:51:39.697927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.214 [2024-07-15 18:51:39.698104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.214 [2024-07-15 18:51:39.698288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.214 [2024-07-15 18:51:39.698299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.214 [2024-07-15 18:51:39.698308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.214 [2024-07-15 18:51:39.701135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.214 [2024-07-15 18:51:39.710496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.214 [2024-07-15 18:51:39.710939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.214 [2024-07-15 18:51:39.710955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.214 [2024-07-15 18:51:39.710962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.214 [2024-07-15 18:51:39.711139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.214 [2024-07-15 18:51:39.711322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.214 [2024-07-15 18:51:39.711330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.214 [2024-07-15 18:51:39.711336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.214 [2024-07-15 18:51:39.714163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.214 [2024-07-15 18:51:39.714721] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:23.214 [2024-07-15 18:51:39.714747] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:23.214 [2024-07-15 18:51:39.714754] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:23.214 [2024-07-15 18:51:39.714760] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:23.214 [2024-07-15 18:51:39.714765] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:23.214 [2024-07-15 18:51:39.714799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:23.214 [2024-07-15 18:51:39.714826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:23.214 [2024-07-15 18:51:39.714827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:23.214 [2024-07-15 18:51:39.723692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.214 [2024-07-15 18:51:39.724109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.214 [2024-07-15 18:51:39.724129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.214 [2024-07-15 18:51:39.724137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.214 [2024-07-15 18:51:39.724323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.214 [2024-07-15 18:51:39.724501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.214 [2024-07-15 18:51:39.724509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.214 [2024-07-15 18:51:39.724516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.214 [2024-07-15 18:51:39.727344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.214 [2024-07-15 18:51:39.736881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.214 [2024-07-15 18:51:39.737402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.214 [2024-07-15 18:51:39.737423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.214 [2024-07-15 18:51:39.737432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.214 [2024-07-15 18:51:39.737604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.214 [2024-07-15 18:51:39.737777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.214 [2024-07-15 18:51:39.737784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.214 [2024-07-15 18:51:39.737791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.214 [2024-07-15 18:51:39.740624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.214 [2024-07-15 18:51:39.749984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.214 [2024-07-15 18:51:39.750493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.214 [2024-07-15 18:51:39.750513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.214 [2024-07-15 18:51:39.750521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.214 [2024-07-15 18:51:39.750698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.214 [2024-07-15 18:51:39.750876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.214 [2024-07-15 18:51:39.750885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.214 [2024-07-15 18:51:39.750892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.214 [2024-07-15 18:51:39.753725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.214 [2024-07-15 18:51:39.763091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.214 [2024-07-15 18:51:39.763590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.214 [2024-07-15 18:51:39.763610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.214 [2024-07-15 18:51:39.763789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.214 [2024-07-15 18:51:39.764009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.214 [2024-07-15 18:51:39.764188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.214 [2024-07-15 18:51:39.764196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.214 [2024-07-15 18:51:39.764203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.214 [2024-07-15 18:51:39.767038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.214 [2024-07-15 18:51:39.776232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.214 [2024-07-15 18:51:39.776688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.214 [2024-07-15 18:51:39.776706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.214 [2024-07-15 18:51:39.776714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.214 [2024-07-15 18:51:39.776891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.214 [2024-07-15 18:51:39.777068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.214 [2024-07-15 18:51:39.777076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.214 [2024-07-15 18:51:39.777083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.214 [2024-07-15 18:51:39.779911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.214 [2024-07-15 18:51:39.789429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.214 [2024-07-15 18:51:39.789827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.214 [2024-07-15 18:51:39.789843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.214 [2024-07-15 18:51:39.789850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.214 [2024-07-15 18:51:39.790027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.215 [2024-07-15 18:51:39.790204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.215 [2024-07-15 18:51:39.790211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.215 [2024-07-15 18:51:39.790218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.215 [2024-07-15 18:51:39.793045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.215 [2024-07-15 18:51:39.802551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.215 [2024-07-15 18:51:39.803019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.215 [2024-07-15 18:51:39.803035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.215 [2024-07-15 18:51:39.803042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.215 [2024-07-15 18:51:39.803219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.215 [2024-07-15 18:51:39.803399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.215 [2024-07-15 18:51:39.803412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.215 [2024-07-15 18:51:39.803419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.215 [2024-07-15 18:51:39.806244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.215 [2024-07-15 18:51:39.815629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.215 [2024-07-15 18:51:39.816045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.215 [2024-07-15 18:51:39.816062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.215 [2024-07-15 18:51:39.816069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.215 [2024-07-15 18:51:39.816248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.215 [2024-07-15 18:51:39.816426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.215 [2024-07-15 18:51:39.816433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.215 [2024-07-15 18:51:39.816440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.215 [2024-07-15 18:51:39.819272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.215 [2024-07-15 18:51:39.828782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.215 [2024-07-15 18:51:39.829236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.215 [2024-07-15 18:51:39.829253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.215 [2024-07-15 18:51:39.829260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.215 [2024-07-15 18:51:39.829436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.215 [2024-07-15 18:51:39.829613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.215 [2024-07-15 18:51:39.829621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.215 [2024-07-15 18:51:39.829627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.215 [2024-07-15 18:51:39.832453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.215 [2024-07-15 18:51:39.841961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.215 [2024-07-15 18:51:39.842424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.215 [2024-07-15 18:51:39.842441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.215 [2024-07-15 18:51:39.842448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.215 [2024-07-15 18:51:39.842624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.215 [2024-07-15 18:51:39.842801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.215 [2024-07-15 18:51:39.842809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.215 [2024-07-15 18:51:39.842815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.215 [2024-07-15 18:51:39.845640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.215 [2024-07-15 18:51:39.855154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.215 [2024-07-15 18:51:39.855622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.215 [2024-07-15 18:51:39.855638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.215 [2024-07-15 18:51:39.855645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.215 [2024-07-15 18:51:39.855821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.215 [2024-07-15 18:51:39.855998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.215 [2024-07-15 18:51:39.856006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.215 [2024-07-15 18:51:39.856012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.215 [2024-07-15 18:51:39.858839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.215 [2024-07-15 18:51:39.868185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.215 [2024-07-15 18:51:39.868625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.215 [2024-07-15 18:51:39.868642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.215 [2024-07-15 18:51:39.868648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.215 [2024-07-15 18:51:39.868825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.215 [2024-07-15 18:51:39.869001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.215 [2024-07-15 18:51:39.869009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.215 [2024-07-15 18:51:39.869016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.215 [2024-07-15 18:51:39.871845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.215 [2024-07-15 18:51:39.881350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.215 [2024-07-15 18:51:39.881825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.215 [2024-07-15 18:51:39.881841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.215 [2024-07-15 18:51:39.881848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.215 [2024-07-15 18:51:39.882024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.215 [2024-07-15 18:51:39.882201] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.215 [2024-07-15 18:51:39.882209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.215 [2024-07-15 18:51:39.882215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.215 [2024-07-15 18:51:39.885043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.215 [2024-07-15 18:51:39.894543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.215 [2024-07-15 18:51:39.895016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.215 [2024-07-15 18:51:39.895032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.215 [2024-07-15 18:51:39.895039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.215 [2024-07-15 18:51:39.895218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.215 [2024-07-15 18:51:39.895399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.215 [2024-07-15 18:51:39.895407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.215 [2024-07-15 18:51:39.895413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.215 [2024-07-15 18:51:39.898238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.215 [2024-07-15 18:51:39.907734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.215 [2024-07-15 18:51:39.908118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.215 [2024-07-15 18:51:39.908133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.215 [2024-07-15 18:51:39.908140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.215 [2024-07-15 18:51:39.908321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.215 [2024-07-15 18:51:39.908497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.215 [2024-07-15 18:51:39.908505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.215 [2024-07-15 18:51:39.908511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.215 [2024-07-15 18:51:39.911331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.475 [2024-07-15 18:51:39.920833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.475 [2024-07-15 18:51:39.921214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.475 [2024-07-15 18:51:39.921234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.475 [2024-07-15 18:51:39.921242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.475 [2024-07-15 18:51:39.921418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.475 [2024-07-15 18:51:39.921595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.475 [2024-07-15 18:51:39.921603] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.475 [2024-07-15 18:51:39.921609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.475 [2024-07-15 18:51:39.924432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.475 [2024-07-15 18:51:39.933942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.475 [2024-07-15 18:51:39.934411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.475 [2024-07-15 18:51:39.934428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.475 [2024-07-15 18:51:39.934435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.475 [2024-07-15 18:51:39.934611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.475 [2024-07-15 18:51:39.934787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.475 [2024-07-15 18:51:39.934795] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.475 [2024-07-15 18:51:39.934804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.475 [2024-07-15 18:51:39.937629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.475 [2024-07-15 18:51:39.947126] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.475 [2024-07-15 18:51:39.947490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.475 [2024-07-15 18:51:39.947506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.475 [2024-07-15 18:51:39.947513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.475 [2024-07-15 18:51:39.947689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.475 [2024-07-15 18:51:39.947866] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.475 [2024-07-15 18:51:39.947873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.475 [2024-07-15 18:51:39.947879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.475 [2024-07-15 18:51:39.950708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.475 [2024-07-15 18:51:39.960205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.475 [2024-07-15 18:51:39.960677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.475 [2024-07-15 18:51:39.960693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.475 [2024-07-15 18:51:39.960700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.475 [2024-07-15 18:51:39.960876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.475 [2024-07-15 18:51:39.961052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.475 [2024-07-15 18:51:39.961059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.475 [2024-07-15 18:51:39.961066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.475 [2024-07-15 18:51:39.963888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.475 [2024-07-15 18:51:39.973398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.475 [2024-07-15 18:51:39.973874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.475 [2024-07-15 18:51:39.973890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.475 [2024-07-15 18:51:39.973897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.475 [2024-07-15 18:51:39.974073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.475 [2024-07-15 18:51:39.974255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.475 [2024-07-15 18:51:39.974263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.475 [2024-07-15 18:51:39.974269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.475 [2024-07-15 18:51:39.977092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.475 [2024-07-15 18:51:39.986583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.475 [2024-07-15 18:51:39.987060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.475 [2024-07-15 18:51:39.987075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.475 [2024-07-15 18:51:39.987082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.475 [2024-07-15 18:51:39.987262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.475 [2024-07-15 18:51:39.987439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.475 [2024-07-15 18:51:39.987447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.476 [2024-07-15 18:51:39.987454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.476 [2024-07-15 18:51:39.990278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.476 [2024-07-15 18:51:39.999782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.476 [2024-07-15 18:51:40.000248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.476 [2024-07-15 18:51:40.000265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.476 [2024-07-15 18:51:40.000272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.476 [2024-07-15 18:51:40.000448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.476 [2024-07-15 18:51:40.000625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.476 [2024-07-15 18:51:40.000632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.476 [2024-07-15 18:51:40.000639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.476 [2024-07-15 18:51:40.003467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.476 [2024-07-15 18:51:40.012968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.476 [2024-07-15 18:51:40.013408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.476 [2024-07-15 18:51:40.013424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.476 [2024-07-15 18:51:40.013431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.476 [2024-07-15 18:51:40.013608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.476 [2024-07-15 18:51:40.013783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.476 [2024-07-15 18:51:40.013791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.476 [2024-07-15 18:51:40.013797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.476 [2024-07-15 18:51:40.016622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.476 [2024-07-15 18:51:40.026550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.476 [2024-07-15 18:51:40.027080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.476 [2024-07-15 18:51:40.027146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.476 [2024-07-15 18:51:40.027168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.476 [2024-07-15 18:51:40.027596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.476 [2024-07-15 18:51:40.028003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.476 [2024-07-15 18:51:40.028029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.476 [2024-07-15 18:51:40.028051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.476 [2024-07-15 18:51:40.032164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.476 [2024-07-15 18:51:40.039687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.476 [2024-07-15 18:51:40.040163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.476 [2024-07-15 18:51:40.040180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.476 [2024-07-15 18:51:40.040187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.476 [2024-07-15 18:51:40.040369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.476 [2024-07-15 18:51:40.040546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.476 [2024-07-15 18:51:40.040554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.476 [2024-07-15 18:51:40.040560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.476 [2024-07-15 18:51:40.043393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.476 [2024-07-15 18:51:40.052724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.476 [2024-07-15 18:51:40.053190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.476 [2024-07-15 18:51:40.053206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.476 [2024-07-15 18:51:40.053213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.476 [2024-07-15 18:51:40.053396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.476 [2024-07-15 18:51:40.053574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.476 [2024-07-15 18:51:40.053582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.476 [2024-07-15 18:51:40.053589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.476 [2024-07-15 18:51:40.056412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.476 [2024-07-15 18:51:40.065763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.476 [2024-07-15 18:51:40.066119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.476 [2024-07-15 18:51:40.066135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.476 [2024-07-15 18:51:40.066142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.476 [2024-07-15 18:51:40.066323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.476 [2024-07-15 18:51:40.066504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.476 [2024-07-15 18:51:40.066513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.476 [2024-07-15 18:51:40.066519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.476 [2024-07-15 18:51:40.069344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.476 [2024-07-15 18:51:40.078845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.476 [2024-07-15 18:51:40.079318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.476 [2024-07-15 18:51:40.079335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.476 [2024-07-15 18:51:40.079342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.476 [2024-07-15 18:51:40.079518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.476 [2024-07-15 18:51:40.079694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.476 [2024-07-15 18:51:40.079702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.476 [2024-07-15 18:51:40.079708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.476 [2024-07-15 18:51:40.082535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.476 [2024-07-15 18:51:40.091882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.476 [2024-07-15 18:51:40.092358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.476 [2024-07-15 18:51:40.092374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.476 [2024-07-15 18:51:40.092382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.476 [2024-07-15 18:51:40.092558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.476 [2024-07-15 18:51:40.092735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.476 [2024-07-15 18:51:40.092743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.476 [2024-07-15 18:51:40.092750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.476 [2024-07-15 18:51:40.095573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.476 [2024-07-15 18:51:40.105071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.476 [2024-07-15 18:51:40.105528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.477 [2024-07-15 18:51:40.105544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.477 [2024-07-15 18:51:40.105551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.477 [2024-07-15 18:51:40.105727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.477 [2024-07-15 18:51:40.105904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.477 [2024-07-15 18:51:40.105912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.477 [2024-07-15 18:51:40.105919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.477 [2024-07-15 18:51:40.108745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.477 [2024-07-15 18:51:40.118255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.477 [2024-07-15 18:51:40.118725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.477 [2024-07-15 18:51:40.118743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.477 [2024-07-15 18:51:40.118750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.477 [2024-07-15 18:51:40.118927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.477 [2024-07-15 18:51:40.119105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.477 [2024-07-15 18:51:40.119112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.477 [2024-07-15 18:51:40.119118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.477 [2024-07-15 18:51:40.121944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.477 [2024-07-15 18:51:40.131441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.477 [2024-07-15 18:51:40.131910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.477 [2024-07-15 18:51:40.131926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.477 [2024-07-15 18:51:40.131933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.477 [2024-07-15 18:51:40.132109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.477 [2024-07-15 18:51:40.132290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.477 [2024-07-15 18:51:40.132299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.477 [2024-07-15 18:51:40.132306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.477 [2024-07-15 18:51:40.135127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.477 [2024-07-15 18:51:40.144636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.477 [2024-07-15 18:51:40.145108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.477 [2024-07-15 18:51:40.145123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.477 [2024-07-15 18:51:40.145130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.477 [2024-07-15 18:51:40.145311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.477 [2024-07-15 18:51:40.145491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.477 [2024-07-15 18:51:40.145499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.477 [2024-07-15 18:51:40.145505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.477 [2024-07-15 18:51:40.148328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.477 [2024-07-15 18:51:40.157823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.477 [2024-07-15 18:51:40.158296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.477 [2024-07-15 18:51:40.158312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.477 [2024-07-15 18:51:40.158319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.477 [2024-07-15 18:51:40.158496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.477 [2024-07-15 18:51:40.158676] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.477 [2024-07-15 18:51:40.158684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.477 [2024-07-15 18:51:40.158690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.477 [2024-07-15 18:51:40.161524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.477 [2024-07-15 18:51:40.170854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.477 [2024-07-15 18:51:40.171324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.477 [2024-07-15 18:51:40.171339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.477 [2024-07-15 18:51:40.171346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.477 [2024-07-15 18:51:40.171522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.477 [2024-07-15 18:51:40.171699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.477 [2024-07-15 18:51:40.171706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.477 [2024-07-15 18:51:40.171713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.477 [2024-07-15 18:51:40.174538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.737 [2024-07-15 18:51:40.184047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.737 [2024-07-15 18:51:40.184519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.737 [2024-07-15 18:51:40.184535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.737 [2024-07-15 18:51:40.184542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.737 [2024-07-15 18:51:40.184718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.737 [2024-07-15 18:51:40.184893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.737 [2024-07-15 18:51:40.184901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.737 [2024-07-15 18:51:40.184907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.737 [2024-07-15 18:51:40.187728] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.737 [2024-07-15 18:51:40.197223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.737 [2024-07-15 18:51:40.197676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.737 [2024-07-15 18:51:40.197692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.737 [2024-07-15 18:51:40.197699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.737 [2024-07-15 18:51:40.197875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.737 [2024-07-15 18:51:40.198053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.737 [2024-07-15 18:51:40.198061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.737 [2024-07-15 18:51:40.198067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.737 [2024-07-15 18:51:40.200892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.737 [2024-07-15 18:51:40.210407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.737 [2024-07-15 18:51:40.210800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.738 [2024-07-15 18:51:40.210816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.738 [2024-07-15 18:51:40.210823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.738 [2024-07-15 18:51:40.210999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.738 [2024-07-15 18:51:40.211175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.738 [2024-07-15 18:51:40.211183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.738 [2024-07-15 18:51:40.211189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.738 [2024-07-15 18:51:40.214013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.738 [2024-07-15 18:51:40.223523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.738 [2024-07-15 18:51:40.223924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.738 [2024-07-15 18:51:40.223940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.738 [2024-07-15 18:51:40.223947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.738 [2024-07-15 18:51:40.224123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.738 [2024-07-15 18:51:40.224303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.738 [2024-07-15 18:51:40.224312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.738 [2024-07-15 18:51:40.224318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.738 [2024-07-15 18:51:40.227140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.738 [2024-07-15 18:51:40.236642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.738 [2024-07-15 18:51:40.237002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.738 [2024-07-15 18:51:40.237018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.738 [2024-07-15 18:51:40.237025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.738 [2024-07-15 18:51:40.237202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.738 [2024-07-15 18:51:40.237381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.738 [2024-07-15 18:51:40.237389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.738 [2024-07-15 18:51:40.237396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.738 [2024-07-15 18:51:40.240218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.738 [2024-07-15 18:51:40.249727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.738 [2024-07-15 18:51:40.250116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.738 [2024-07-15 18:51:40.250132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.738 [2024-07-15 18:51:40.250143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.738 [2024-07-15 18:51:40.250324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.738 [2024-07-15 18:51:40.250501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.738 [2024-07-15 18:51:40.250509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.738 [2024-07-15 18:51:40.250515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.738 [2024-07-15 18:51:40.253338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.738 [2024-07-15 18:51:40.262845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.738 [2024-07-15 18:51:40.263315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.738 [2024-07-15 18:51:40.263331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.738 [2024-07-15 18:51:40.263338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.738 [2024-07-15 18:51:40.263516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.738 [2024-07-15 18:51:40.263692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.738 [2024-07-15 18:51:40.263701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.738 [2024-07-15 18:51:40.263708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.738 [2024-07-15 18:51:40.266530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.738 [2024-07-15 18:51:40.276038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.738 [2024-07-15 18:51:40.276512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.738 [2024-07-15 18:51:40.276528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.738 [2024-07-15 18:51:40.276535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.738 [2024-07-15 18:51:40.276712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.738 [2024-07-15 18:51:40.276889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.738 [2024-07-15 18:51:40.276897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.738 [2024-07-15 18:51:40.276903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.738 [2024-07-15 18:51:40.279736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.738 [2024-07-15 18:51:40.289071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.738 [2024-07-15 18:51:40.289540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.738 [2024-07-15 18:51:40.289556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.738 [2024-07-15 18:51:40.289563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.738 [2024-07-15 18:51:40.289739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.738 [2024-07-15 18:51:40.289916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.738 [2024-07-15 18:51:40.289928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.738 [2024-07-15 18:51:40.289935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.738 [2024-07-15 18:51:40.292762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.738 [2024-07-15 18:51:40.302102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.738 [2024-07-15 18:51:40.302576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.738 [2024-07-15 18:51:40.302592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.738 [2024-07-15 18:51:40.302599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.738 [2024-07-15 18:51:40.302776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.738 [2024-07-15 18:51:40.302953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.738 [2024-07-15 18:51:40.302962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.738 [2024-07-15 18:51:40.302969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.738 [2024-07-15 18:51:40.305794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.738 [2024-07-15 18:51:40.315299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.738 [2024-07-15 18:51:40.315641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.738 [2024-07-15 18:51:40.315657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.738 [2024-07-15 18:51:40.315665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.738 [2024-07-15 18:51:40.315842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.738 [2024-07-15 18:51:40.316019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.738 [2024-07-15 18:51:40.316027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.738 [2024-07-15 18:51:40.316034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.738 [2024-07-15 18:51:40.318860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.738 [2024-07-15 18:51:40.328379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.738 [2024-07-15 18:51:40.328848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.738 [2024-07-15 18:51:40.328864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.738 [2024-07-15 18:51:40.328871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.738 [2024-07-15 18:51:40.329047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.738 [2024-07-15 18:51:40.329229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.738 [2024-07-15 18:51:40.329238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.738 [2024-07-15 18:51:40.329244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.738 [2024-07-15 18:51:40.332068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.738 [2024-07-15 18:51:40.341412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.738 [2024-07-15 18:51:40.341861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.738 [2024-07-15 18:51:40.341876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.738 [2024-07-15 18:51:40.341883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.738 [2024-07-15 18:51:40.342059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.738 [2024-07-15 18:51:40.342240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.738 [2024-07-15 18:51:40.342249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.738 [2024-07-15 18:51:40.342255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.738 [2024-07-15 18:51:40.345081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.738 [2024-07-15 18:51:40.354597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.739 [2024-07-15 18:51:40.355072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.739 [2024-07-15 18:51:40.355088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.739 [2024-07-15 18:51:40.355095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.739 [2024-07-15 18:51:40.355276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.739 [2024-07-15 18:51:40.355453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.739 [2024-07-15 18:51:40.355461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.739 [2024-07-15 18:51:40.355467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.739 [2024-07-15 18:51:40.358297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.739 [2024-07-15 18:51:40.367640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.739 [2024-07-15 18:51:40.368108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.739 [2024-07-15 18:51:40.368124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.739 [2024-07-15 18:51:40.368131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.739 [2024-07-15 18:51:40.368311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.739 [2024-07-15 18:51:40.368487] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.739 [2024-07-15 18:51:40.368495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.739 [2024-07-15 18:51:40.368501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.739 [2024-07-15 18:51:40.371329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.739 [2024-07-15 18:51:40.380679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.739 [2024-07-15 18:51:40.381052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.739 [2024-07-15 18:51:40.381068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.739 [2024-07-15 18:51:40.381074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.739 [2024-07-15 18:51:40.381260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.739 [2024-07-15 18:51:40.381437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.739 [2024-07-15 18:51:40.381445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.739 [2024-07-15 18:51:40.381451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.739 [2024-07-15 18:51:40.384279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.739 18:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:23.739 18:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:26:23.739 18:51:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:23.739 18:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:23.739 18:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:23.739 [2024-07-15 18:51:40.393789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.739 [2024-07-15 18:51:40.394122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.739 [2024-07-15 18:51:40.394138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.739 [2024-07-15 18:51:40.394145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.739 [2024-07-15 18:51:40.394326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.739 [2024-07-15 18:51:40.394508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.739 [2024-07-15 18:51:40.394516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.739 [2024-07-15 18:51:40.394523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.739 [2024-07-15 18:51:40.397351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.739 [2024-07-15 18:51:40.406868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.739 [2024-07-15 18:51:40.407154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.739 [2024-07-15 18:51:40.407170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.739 [2024-07-15 18:51:40.407177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.739 [2024-07-15 18:51:40.407359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.739 [2024-07-15 18:51:40.407536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.739 [2024-07-15 18:51:40.407544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.739 [2024-07-15 18:51:40.407550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.739 [2024-07-15 18:51:40.410382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.739 [2024-07-15 18:51:40.420063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.739 [2024-07-15 18:51:40.420439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.739 [2024-07-15 18:51:40.420456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.739 [2024-07-15 18:51:40.420462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.739 18:51:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:23.739 [2024-07-15 18:51:40.420642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.739 [2024-07-15 18:51:40.420820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.739 [2024-07-15 18:51:40.420829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.739 [2024-07-15 18:51:40.420836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.739 18:51:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:23.739 18:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.739 18:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:23.739 [2024-07-15 18:51:40.423667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.739 [2024-07-15 18:51:40.427436] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:23.739 18:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.739 18:51:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:23.739 18:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.739 [2024-07-15 18:51:40.433204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.739 18:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:23.739 [2024-07-15 18:51:40.433522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.739 [2024-07-15 18:51:40.433538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.739 [2024-07-15 18:51:40.433545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.739 [2024-07-15 18:51:40.433722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.739 [2024-07-15 18:51:40.433900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.739 [2024-07-15 18:51:40.433908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.739 [2024-07-15 18:51:40.433914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.739 [2024-07-15 18:51:40.436746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.999 [2024-07-15 18:51:40.446262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.999 [2024-07-15 18:51:40.446575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-07-15 18:51:40.446590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.999 [2024-07-15 18:51:40.446597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.999 [2024-07-15 18:51:40.446774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.999 [2024-07-15 18:51:40.446951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.999 [2024-07-15 18:51:40.446959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.999 [2024-07-15 18:51:40.446965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.999 [2024-07-15 18:51:40.449799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.999 [2024-07-15 18:51:40.459322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.999 [2024-07-15 18:51:40.459770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-07-15 18:51:40.459785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.999 [2024-07-15 18:51:40.459793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.999 [2024-07-15 18:51:40.459969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.999 [2024-07-15 18:51:40.460147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.999 [2024-07-15 18:51:40.460156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.999 [2024-07-15 18:51:40.460162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.999 [2024-07-15 18:51:40.463001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.999 Malloc0 00:26:23.999 18:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.999 18:51:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:23.999 18:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.999 18:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:23.999 [2024-07-15 18:51:40.472530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.999 [2024-07-15 18:51:40.472830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-07-15 18:51:40.472846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.999 [2024-07-15 18:51:40.472853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.999 [2024-07-15 18:51:40.473030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.999 [2024-07-15 18:51:40.473207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.999 [2024-07-15 18:51:40.473215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.999 [2024-07-15 18:51:40.473221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.999 [2024-07-15 18:51:40.476049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.999 18:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.999 18:51:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:23.999 18:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.999 18:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:23.999 [2024-07-15 18:51:40.485567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.999 [2024-07-15 18:51:40.485905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.999 [2024-07-15 18:51:40.485921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a980 with addr=10.0.0.2, port=4420 00:26:23.999 [2024-07-15 18:51:40.485928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146a980 is same with the state(5) to be set 00:26:23.999 [2024-07-15 18:51:40.486103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a980 (9): Bad file descriptor 00:26:23.999 [2024-07-15 18:51:40.486284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.999 [2024-07-15 18:51:40.486294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.999 [2024-07-15 18:51:40.486303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.999 [2024-07-15 18:51:40.489134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.999 18:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.999 18:51:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:23.999 18:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.999 18:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:23.999 [2024-07-15 18:51:40.492795] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:23.999 18:51:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.999 18:51:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1242412 00:26:23.999 [2024-07-15 18:51:40.498646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.999 [2024-07-15 18:51:40.535051] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:33.971 00:26:33.971 Latency(us) 00:26:33.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:33.971 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:33.971 Verification LBA range: start 0x0 length 0x4000 00:26:33.971 Nvme1n1 : 15.01 8112.67 31.69 12669.12 0.00 6139.21 680.29 15500.69 00:26:33.971 =================================================================================================================== 00:26:33.971 Total : 8112.67 31.69 12669.12 0.00 6139.21 680.29 15500.69 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:33.971 rmmod nvme_tcp 00:26:33.971 rmmod nvme_fabrics 00:26:33.971 rmmod nvme_keyring 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@124 -- # set -e 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1243410 ']' 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1243410 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1243410 ']' 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1243410 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1243410 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1243410' 00:26:33.971 killing process with pid 1243410 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1243410 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1243410 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.971 18:51:49 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:33.971 18:51:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.908 18:51:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:34.908 00:26:34.908 real 0m26.011s 00:26:34.908 user 1m2.341s 00:26:34.908 sys 0m6.220s 00:26:34.908 18:51:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:34.908 18:51:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:34.908 ************************************ 00:26:34.908 END TEST nvmf_bdevperf 00:26:34.908 ************************************ 00:26:34.908 18:51:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:34.908 18:51:51 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:34.908 18:51:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:34.908 18:51:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:34.908 18:51:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:34.908 ************************************ 00:26:34.908 START TEST nvmf_target_disconnect 00:26:34.908 ************************************ 00:26:34.908 18:51:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:34.908 * Looking for test storage... 
00:26:34.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:34.908 18:51:51 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:34.908 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:34.908 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:34.908 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:34.908 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:34.908 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:34.908 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:34.908 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:34.908 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:34.908 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:34.908 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:34.908 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:35.168 18:51:51 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:26:35.168 18:51:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:40.441 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:40.442 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:40.442 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:40.442 18:51:56 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:40.442 Found net devices under 0000:86:00.0: cvl_0_0 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:40.442 Found net devices under 0000:86:00.1: cvl_0_1 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:40.442 18:51:56 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:40.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:40.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:26:40.442 00:26:40.442 --- 10.0.0.2 ping statistics --- 00:26:40.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.442 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:40.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:40.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:26:40.442 00:26:40.442 --- 10.0.0.1 ping statistics --- 00:26:40.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.442 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:40.442 18:51:56 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:40.442 18:51:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:40.442 ************************************ 00:26:40.442 START TEST nvmf_target_disconnect_tc1 00:26:40.442 ************************************ 00:26:40.442 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:26:40.442 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:40.442 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:26:40.442 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:40.442 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:40.442 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:40.442 18:51:57 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:40.442 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:40.442 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:40.442 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:40.442 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:40.442 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:26:40.442 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:40.442 EAL: No free 2048 kB hugepages reported on node 1 00:26:40.442 [2024-07-15 18:51:57.114050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.442 [2024-07-15 18:51:57.114092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d5e60 with addr=10.0.0.2, port=4420 00:26:40.442 [2024-07-15 18:51:57.114111] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:40.442 [2024-07-15 18:51:57.114120] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:40.442 [2024-07-15 18:51:57.114126] nvme.c: 
913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:26:40.442 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:40.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:40.442 Initializing NVMe Controllers 00:26:40.442 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:26:40.442 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:40.442 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:40.442 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:40.442 00:26:40.442 real 0m0.101s 00:26:40.442 user 0m0.040s 00:26:40.442 sys 0m0.057s 00:26:40.443 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:40.443 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:40.443 ************************************ 00:26:40.443 END TEST nvmf_target_disconnect_tc1 00:26:40.443 ************************************ 00:26:40.702 18:51:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:26:40.702 18:51:57 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:40.702 18:51:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:40.702 18:51:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:40.702 18:51:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:40.702 ************************************ 00:26:40.702 START TEST nvmf_target_disconnect_tc2 00:26:40.702 
************************************ 00:26:40.702 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:26:40.702 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:26:40.702 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:40.702 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:40.702 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:40.702 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.702 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1248362 00:26:40.702 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1248362 00:26:40.702 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:40.702 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1248362 ']' 00:26:40.702 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.702 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:40.702 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:40.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:40.702 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:40.702 18:51:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.702 [2024-07-15 18:51:57.248600] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:26:40.702 [2024-07-15 18:51:57.248636] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:40.702 EAL: No free 2048 kB hugepages reported on node 1 00:26:40.702 [2024-07-15 18:51:57.317383] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:40.702 [2024-07-15 18:51:57.396411] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:40.702 [2024-07-15 18:51:57.396448] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:40.702 [2024-07-15 18:51:57.396455] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:40.702 [2024-07-15 18:51:57.396462] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:40.702 [2024-07-15 18:51:57.396467] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:40.702 [2024-07-15 18:51:57.396591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:26:40.702 [2024-07-15 18:51:57.396702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:26:40.702 [2024-07-15 18:51:57.396807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:40.702 [2024-07-15 18:51:57.396808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:41.639 Malloc0 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 
00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:41.639 [2024-07-15 18:51:58.107677] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:41.639 [2024-07-15 18:51:58.132701] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1248606 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:41.639 18:51:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:41.639 EAL: No free 2048 kB hugepages reported on node 1 00:26:43.552 18:52:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1248362 00:26:43.552 18:52:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error 
(sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 
00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 [2024-07-15 18:52:00.159383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 
starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 [2024-07-15 18:52:00.159590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O 
failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 
00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Read completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 Write completed with error (sct=0, sc=8) 00:26:43.552 starting I/O failed 00:26:43.552 [2024-07-15 18:52:00.159785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:43.552 [2024-07-15 18:52:00.160085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.552 [2024-07-15 18:52:00.160101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.552 qpair failed and we were unable to recover it. 00:26:43.552 [2024-07-15 18:52:00.160335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.160346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.160581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.160591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 
00:26:43.553 [2024-07-15 18:52:00.160852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.160870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.161076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.161087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.161278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.161289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.161460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.161470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.161581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.161591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 
00:26:43.553 [2024-07-15 18:52:00.161784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.161794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.161959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.161969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.162102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.162112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.162234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.162246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.162359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.162369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 
00:26:43.553 [2024-07-15 18:52:00.162570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.162580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.162815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.162825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.162956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.162966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.163049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.163058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.163190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.163201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 
00:26:43.553 [2024-07-15 18:52:00.163317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.163327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.163423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.163432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.163617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.163627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.163731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.163741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.163866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.163876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 
00:26:43.553 [2024-07-15 18:52:00.164077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.164087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.164182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.164191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.164426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.164437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.164600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.164609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.164734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.164745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 
00:26:43.553 [2024-07-15 18:52:00.165000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.165010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.165268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.165278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.165508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.165520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.165682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.165692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.165822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.165832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 
00:26:43.553 [2024-07-15 18:52:00.166013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.166023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.166152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.166163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.166332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.166342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.166532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.166542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.166645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.166655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 
00:26:43.553 [2024-07-15 18:52:00.166742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.166751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.166866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.166876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.167134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.167144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.553 qpair failed and we were unable to recover it. 00:26:43.553 [2024-07-15 18:52:00.167375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.553 [2024-07-15 18:52:00.167385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.554 qpair failed and we were unable to recover it. 00:26:43.554 [2024-07-15 18:52:00.167585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.554 [2024-07-15 18:52:00.167595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.554 qpair failed and we were unable to recover it. 
00:26:43.554 [2024-07-15 18:52:00.167773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.554 [2024-07-15 18:52:00.167783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.554 qpair failed and we were unable to recover it. 00:26:43.554 [2024-07-15 18:52:00.167989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.554 [2024-07-15 18:52:00.168019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.554 qpair failed and we were unable to recover it. 00:26:43.554 [2024-07-15 18:52:00.168236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.554 [2024-07-15 18:52:00.168283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.554 qpair failed and we were unable to recover it. 00:26:43.554 [2024-07-15 18:52:00.168423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.554 [2024-07-15 18:52:00.168452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.554 qpair failed and we were unable to recover it. 00:26:43.554 [2024-07-15 18:52:00.168662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.554 [2024-07-15 18:52:00.168692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.554 qpair failed and we were unable to recover it. 
00:26:43.554 [2024-07-15 18:52:00.168857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.554 [2024-07-15 18:52:00.168886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.554 qpair failed and we were unable to recover it. 00:26:43.554 [2024-07-15 18:52:00.169092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.554 [2024-07-15 18:52:00.169102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.554 qpair failed and we were unable to recover it. 00:26:43.554 [2024-07-15 18:52:00.169290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.554 [2024-07-15 18:52:00.169301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.554 qpair failed and we were unable to recover it. 00:26:43.554 [2024-07-15 18:52:00.169410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.554 [2024-07-15 18:52:00.169419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.554 qpair failed and we were unable to recover it. 00:26:43.554 [2024-07-15 18:52:00.169598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.554 [2024-07-15 18:52:00.169608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.554 qpair failed and we were unable to recover it. 
00:26:43.554 [2024-07-15 18:52:00.169788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.554 [2024-07-15 18:52:00.169798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.554 qpair failed and we were unable to recover it. 00:26:43.554 [2024-07-15 18:52:00.169983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.554 [2024-07-15 18:52:00.170013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.554 qpair failed and we were unable to recover it. 00:26:43.554 [2024-07-15 18:52:00.170219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.554 [2024-07-15 18:52:00.170257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.554 qpair failed and we were unable to recover it. 00:26:43.554 [2024-07-15 18:52:00.170418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.554 [2024-07-15 18:52:00.170447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.554 qpair failed and we were unable to recover it. 00:26:43.554 [2024-07-15 18:52:00.170593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.554 [2024-07-15 18:52:00.170623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.554 qpair failed and we were unable to recover it. 
00:26:43.554 [2024-07-15 18:52:00.170783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.554 [2024-07-15 18:52:00.170812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.554 qpair failed and we were unable to recover it. 00:26:43.554 [2024-07-15 18:52:00.171033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.554 [2024-07-15 18:52:00.171063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.554 qpair failed and we were unable to recover it. 00:26:43.554 [2024-07-15 18:52:00.171266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.554 [2024-07-15 18:52:00.171280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.554 qpair failed and we were unable to recover it. 00:26:43.554 [2024-07-15 18:52:00.171480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.554 [2024-07-15 18:52:00.171494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.554 qpair failed and we were unable to recover it. 00:26:43.554 [2024-07-15 18:52:00.171585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.554 [2024-07-15 18:52:00.171598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.554 qpair failed and we were unable to recover it. 
00:26:43.554 [2024-07-15 18:52:00.171780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.554 [2024-07-15 18:52:00.171793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.554 qpair failed and we were unable to recover it. 00:26:43.554 [2024-07-15 18:52:00.172018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.554 [2024-07-15 18:52:00.172047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.554 qpair failed and we were unable to recover it. 00:26:43.554 [2024-07-15 18:52:00.172201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.554 [2024-07-15 18:52:00.172238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.554 qpair failed and we were unable to recover it. 00:26:43.554 [2024-07-15 18:52:00.172466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.554 [2024-07-15 18:52:00.172496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.554 qpair failed and we were unable to recover it. 00:26:43.554 [2024-07-15 18:52:00.172707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.554 [2024-07-15 18:52:00.172736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.554 qpair failed and we were unable to recover it. 
00:26:43.554 [2024-07-15 18:52:00.172939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.554 [2024-07-15 18:52:00.172969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.554 qpair failed and we were unable to recover it. 00:26:43.554 [2024-07-15 18:52:00.173642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.554 [2024-07-15 18:52:00.173711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.554 qpair failed and we were unable to recover it.
[... the same connect()-failed / qpair-failed message pair repeats roughly 100 more times between 18:52:00.173 and 18:52:00.193, with only the timestamps varying; the tqpair switches from 0xebded0 to 0x7ff7c0000b90 at 18:52:00.173711 and every attempt targets addr=10.0.0.2, port=4420 ...]
00:26:43.557 [2024-07-15 18:52:00.194053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.557 [2024-07-15 18:52:00.194082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.557 qpair failed and we were unable to recover it. 00:26:43.557 [2024-07-15 18:52:00.194299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.557 [2024-07-15 18:52:00.194329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.557 qpair failed and we were unable to recover it. 00:26:43.557 [2024-07-15 18:52:00.194539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.557 [2024-07-15 18:52:00.194569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.557 qpair failed and we were unable to recover it. 00:26:43.557 [2024-07-15 18:52:00.194804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.557 [2024-07-15 18:52:00.194813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.557 qpair failed and we were unable to recover it. 00:26:43.557 [2024-07-15 18:52:00.195008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.557 [2024-07-15 18:52:00.195018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.557 qpair failed and we were unable to recover it. 
00:26:43.557 [2024-07-15 18:52:00.195122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.557 [2024-07-15 18:52:00.195132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.557 qpair failed and we were unable to recover it. 00:26:43.557 [2024-07-15 18:52:00.195296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.557 [2024-07-15 18:52:00.195306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.557 qpair failed and we were unable to recover it. 00:26:43.557 [2024-07-15 18:52:00.195491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.557 [2024-07-15 18:52:00.195501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.557 qpair failed and we were unable to recover it. 00:26:43.557 [2024-07-15 18:52:00.195742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.557 [2024-07-15 18:52:00.195771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.557 qpair failed and we were unable to recover it. 00:26:43.557 [2024-07-15 18:52:00.195974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.557 [2024-07-15 18:52:00.196004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.557 qpair failed and we were unable to recover it. 
00:26:43.557 [2024-07-15 18:52:00.196149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.557 [2024-07-15 18:52:00.196178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.557 qpair failed and we were unable to recover it. 00:26:43.557 [2024-07-15 18:52:00.196348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.557 [2024-07-15 18:52:00.196378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.557 qpair failed and we were unable to recover it. 00:26:43.557 [2024-07-15 18:52:00.196610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.557 [2024-07-15 18:52:00.196640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.557 qpair failed and we were unable to recover it. 00:26:43.557 [2024-07-15 18:52:00.196882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.557 [2024-07-15 18:52:00.196916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.557 qpair failed and we were unable to recover it. 00:26:43.557 [2024-07-15 18:52:00.197218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.557 [2024-07-15 18:52:00.197258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.557 qpair failed and we were unable to recover it. 
00:26:43.557 [2024-07-15 18:52:00.197464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.557 [2024-07-15 18:52:00.197494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.557 qpair failed and we were unable to recover it. 00:26:43.557 [2024-07-15 18:52:00.197655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.557 [2024-07-15 18:52:00.197684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.557 qpair failed and we were unable to recover it. 00:26:43.557 [2024-07-15 18:52:00.197897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.197927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.198211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.198221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.198477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.198487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 
00:26:43.558 [2024-07-15 18:52:00.198616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.198626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.198736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.198746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.198865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.198876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.199067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.199077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.199249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.199259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 
00:26:43.558 [2024-07-15 18:52:00.199446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.199475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.199630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.199659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.199823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.199853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.200068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.200078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.200244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.200254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 
00:26:43.558 [2024-07-15 18:52:00.200375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.200385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.200481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.200490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.200681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.200691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.200799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.200809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.200915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.200925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 
00:26:43.558 [2024-07-15 18:52:00.201196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.201206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.201436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.201446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.201542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.201551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.201779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.201810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.202047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.202076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 
00:26:43.558 [2024-07-15 18:52:00.202291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.202321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.202488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.202517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.202725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.202755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.202931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.202960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.203166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.203195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 
00:26:43.558 [2024-07-15 18:52:00.203410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.203441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.203641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.203670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.203803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.203832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.204051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.204086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.204178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.204188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 
00:26:43.558 [2024-07-15 18:52:00.204435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.204445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.204682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.204691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.204916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.204926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.205095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.205106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.205208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.205220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 
00:26:43.558 [2024-07-15 18:52:00.205356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.205366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.205538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.205548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.558 [2024-07-15 18:52:00.205664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.558 [2024-07-15 18:52:00.205674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.558 qpair failed and we were unable to recover it. 00:26:43.559 [2024-07-15 18:52:00.205863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.559 [2024-07-15 18:52:00.205873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.559 qpair failed and we were unable to recover it. 00:26:43.559 [2024-07-15 18:52:00.206105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.559 [2024-07-15 18:52:00.206114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.559 qpair failed and we were unable to recover it. 
00:26:43.559 [2024-07-15 18:52:00.206369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.559 [2024-07-15 18:52:00.206379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.559 qpair failed and we were unable to recover it. 00:26:43.559 [2024-07-15 18:52:00.206452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.559 [2024-07-15 18:52:00.206461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.559 qpair failed and we were unable to recover it. 00:26:43.559 [2024-07-15 18:52:00.206575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.559 [2024-07-15 18:52:00.206584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.559 qpair failed and we were unable to recover it. 00:26:43.559 [2024-07-15 18:52:00.206713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.559 [2024-07-15 18:52:00.206723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.559 qpair failed and we were unable to recover it. 00:26:43.559 [2024-07-15 18:52:00.206840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.559 [2024-07-15 18:52:00.206851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.559 qpair failed and we were unable to recover it. 
00:26:43.559 [2024-07-15 18:52:00.206947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.559 [2024-07-15 18:52:00.206956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.559 qpair failed and we were unable to recover it. 00:26:43.559 [2024-07-15 18:52:00.207123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.559 [2024-07-15 18:52:00.207133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.559 qpair failed and we were unable to recover it. 00:26:43.559 [2024-07-15 18:52:00.207265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.559 [2024-07-15 18:52:00.207276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.559 qpair failed and we were unable to recover it. 00:26:43.559 [2024-07-15 18:52:00.207443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.559 [2024-07-15 18:52:00.207453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.559 qpair failed and we were unable to recover it. 00:26:43.559 [2024-07-15 18:52:00.207707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.559 [2024-07-15 18:52:00.207717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.559 qpair failed and we were unable to recover it. 
00:26:43.559 [2024-07-15 18:52:00.207879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.559 [2024-07-15 18:52:00.207889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.559 qpair failed and we were unable to recover it. 00:26:43.559 [2024-07-15 18:52:00.208086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.559 [2024-07-15 18:52:00.208115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.559 qpair failed and we were unable to recover it. 00:26:43.559 [2024-07-15 18:52:00.208341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.559 [2024-07-15 18:52:00.208373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.559 qpair failed and we were unable to recover it. 00:26:43.559 [2024-07-15 18:52:00.208586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.559 [2024-07-15 18:52:00.208616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.559 qpair failed and we were unable to recover it. 00:26:43.559 [2024-07-15 18:52:00.208831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.559 [2024-07-15 18:52:00.208842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.559 qpair failed and we were unable to recover it. 
00:26:43.559 [2024-07-15 18:52:00.209069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.559 [2024-07-15 18:52:00.209079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.559 qpair failed and we were unable to recover it. 00:26:43.559 [2024-07-15 18:52:00.209304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.559 [2024-07-15 18:52:00.209314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.559 qpair failed and we were unable to recover it. 00:26:43.559 [2024-07-15 18:52:00.209483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.559 [2024-07-15 18:52:00.209493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.559 qpair failed and we were unable to recover it. 00:26:43.559 [2024-07-15 18:52:00.209666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.559 [2024-07-15 18:52:00.209695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.559 qpair failed and we were unable to recover it. 00:26:43.559 [2024-07-15 18:52:00.209854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.559 [2024-07-15 18:52:00.209883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.559 qpair failed and we were unable to recover it. 
00:26:43.562 [2024-07-15 18:52:00.233697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.562 [2024-07-15 18:52:00.233707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.562 qpair failed and we were unable to recover it. 00:26:43.562 [2024-07-15 18:52:00.233868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.562 [2024-07-15 18:52:00.233878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.562 qpair failed and we were unable to recover it. 00:26:43.562 [2024-07-15 18:52:00.233991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.562 [2024-07-15 18:52:00.234000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.562 qpair failed and we were unable to recover it. 00:26:43.562 [2024-07-15 18:52:00.234176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.562 [2024-07-15 18:52:00.234186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.562 qpair failed and we were unable to recover it. 00:26:43.562 [2024-07-15 18:52:00.234282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.562 [2024-07-15 18:52:00.234293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.562 qpair failed and we were unable to recover it. 
00:26:43.562 [2024-07-15 18:52:00.234459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.562 [2024-07-15 18:52:00.234469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.562 qpair failed and we were unable to recover it. 00:26:43.562 [2024-07-15 18:52:00.234635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.562 [2024-07-15 18:52:00.234645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.562 qpair failed and we were unable to recover it. 00:26:43.562 [2024-07-15 18:52:00.234758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.562 [2024-07-15 18:52:00.234768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.562 qpair failed and we were unable to recover it. 00:26:43.562 [2024-07-15 18:52:00.234942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.562 [2024-07-15 18:52:00.234952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.562 qpair failed and we were unable to recover it. 00:26:43.562 [2024-07-15 18:52:00.235062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.562 [2024-07-15 18:52:00.235071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.562 qpair failed and we were unable to recover it. 
00:26:43.562 [2024-07-15 18:52:00.235187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.562 [2024-07-15 18:52:00.235197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.562 qpair failed and we were unable to recover it. 00:26:43.562 [2024-07-15 18:52:00.235382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.562 [2024-07-15 18:52:00.235393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.562 qpair failed and we were unable to recover it. 00:26:43.562 [2024-07-15 18:52:00.235576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.562 [2024-07-15 18:52:00.235606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.562 qpair failed and we were unable to recover it. 00:26:43.562 [2024-07-15 18:52:00.235823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.562 [2024-07-15 18:52:00.235852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.562 qpair failed and we were unable to recover it. 00:26:43.562 [2024-07-15 18:52:00.236151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.562 [2024-07-15 18:52:00.236180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.562 qpair failed and we were unable to recover it. 
00:26:43.562 [2024-07-15 18:52:00.236343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.562 [2024-07-15 18:52:00.236373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.562 qpair failed and we were unable to recover it. 00:26:43.562 [2024-07-15 18:52:00.236612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.562 [2024-07-15 18:52:00.236642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.562 qpair failed and we were unable to recover it. 00:26:43.562 [2024-07-15 18:52:00.236869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.562 [2024-07-15 18:52:00.236898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.562 qpair failed and we were unable to recover it. 00:26:43.562 [2024-07-15 18:52:00.237130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.562 [2024-07-15 18:52:00.237160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.562 qpair failed and we were unable to recover it. 00:26:43.562 [2024-07-15 18:52:00.237428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.562 [2024-07-15 18:52:00.237438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.562 qpair failed and we were unable to recover it. 
00:26:43.562 [2024-07-15 18:52:00.237646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.562 [2024-07-15 18:52:00.237656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.562 qpair failed and we were unable to recover it. 00:26:43.562 [2024-07-15 18:52:00.237854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.562 [2024-07-15 18:52:00.237863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.562 qpair failed and we were unable to recover it. 00:26:43.562 [2024-07-15 18:52:00.237972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.562 [2024-07-15 18:52:00.237982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.562 qpair failed and we were unable to recover it. 00:26:43.562 [2024-07-15 18:52:00.238073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.562 [2024-07-15 18:52:00.238084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.238246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.238257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 
00:26:43.563 [2024-07-15 18:52:00.238427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.238437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.238613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.238633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.238746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.238755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.238849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.238862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.238978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.238988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 
00:26:43.563 [2024-07-15 18:52:00.239084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.239093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.239273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.239283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.239361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.239370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.239562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.239591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.239750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.239779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 
00:26:43.563 [2024-07-15 18:52:00.239998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.240027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.240236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.240246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.240385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.240396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.240561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.240570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.240766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.240776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 
00:26:43.563 [2024-07-15 18:52:00.240941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.240952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.241184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.241213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.241453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.241483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.241644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.241673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.241962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.241991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 
00:26:43.563 [2024-07-15 18:52:00.242214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.242254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.242358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.242368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.242434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.242443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.242567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.242576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.242749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.242759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 
00:26:43.563 [2024-07-15 18:52:00.243031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.243061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.243284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.243315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.243528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.243558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.243843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.243872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.244075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.244104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 
00:26:43.563 [2024-07-15 18:52:00.244321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.244351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.244554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.244583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.244787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.244816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.244976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.245006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.245143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.245153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 
00:26:43.563 [2024-07-15 18:52:00.245259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.245268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.245544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.245553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.245715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.245725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.563 [2024-07-15 18:52:00.245979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.563 [2024-07-15 18:52:00.245990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.563 qpair failed and we were unable to recover it. 00:26:43.564 [2024-07-15 18:52:00.246174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-07-15 18:52:00.246184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.564 qpair failed and we were unable to recover it. 
00:26:43.564 [2024-07-15 18:52:00.246370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-07-15 18:52:00.246400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.564 qpair failed and we were unable to recover it. 00:26:43.564 [2024-07-15 18:52:00.246569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-07-15 18:52:00.246598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.564 qpair failed and we were unable to recover it. 00:26:43.564 [2024-07-15 18:52:00.246847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-07-15 18:52:00.246877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.564 qpair failed and we were unable to recover it. 00:26:43.564 [2024-07-15 18:52:00.247069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-07-15 18:52:00.247105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.564 qpair failed and we were unable to recover it. 00:26:43.564 [2024-07-15 18:52:00.247302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-07-15 18:52:00.247313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.564 qpair failed and we were unable to recover it. 
00:26:43.564 [2024-07-15 18:52:00.247427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-07-15 18:52:00.247437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.564 qpair failed and we were unable to recover it. 00:26:43.564 [2024-07-15 18:52:00.247601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-07-15 18:52:00.247611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.564 qpair failed and we were unable to recover it. 00:26:43.564 [2024-07-15 18:52:00.247712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-07-15 18:52:00.247721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.564 qpair failed and we were unable to recover it. 00:26:43.564 [2024-07-15 18:52:00.247831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-07-15 18:52:00.247840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.564 qpair failed and we were unable to recover it. 00:26:43.564 [2024-07-15 18:52:00.248007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-07-15 18:52:00.248017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.564 qpair failed and we were unable to recover it. 
00:26:43.564 [2024-07-15 18:52:00.248180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-07-15 18:52:00.248189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.564 qpair failed and we were unable to recover it. 00:26:43.564 [2024-07-15 18:52:00.248932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-07-15 18:52:00.248972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.564 qpair failed and we were unable to recover it. 00:26:43.564 [2024-07-15 18:52:00.249287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-07-15 18:52:00.249299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.564 qpair failed and we were unable to recover it. 00:26:43.564 [2024-07-15 18:52:00.249561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-07-15 18:52:00.249571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.564 qpair failed and we were unable to recover it. 00:26:43.564 [2024-07-15 18:52:00.249656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-07-15 18:52:00.249665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.564 qpair failed and we were unable to recover it. 
00:26:43.564 [2024-07-15 18:52:00.249825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-07-15 18:52:00.249835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.564 qpair failed and we were unable to recover it. 00:26:43.564 [2024-07-15 18:52:00.250014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-07-15 18:52:00.250056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.564 qpair failed and we were unable to recover it. 00:26:43.564 [2024-07-15 18:52:00.250205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-07-15 18:52:00.250246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.564 qpair failed and we were unable to recover it. 00:26:43.564 [2024-07-15 18:52:00.250481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-07-15 18:52:00.250511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.564 qpair failed and we were unable to recover it. 00:26:43.564 [2024-07-15 18:52:00.250672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-07-15 18:52:00.250702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.564 qpair failed and we were unable to recover it. 
00:26:43.564 [2024-07-15 18:52:00.251026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-07-15 18:52:00.251057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.564 qpair failed and we were unable to recover it. 00:26:43.564 [2024-07-15 18:52:00.251255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-07-15 18:52:00.251265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.564 qpair failed and we were unable to recover it. 00:26:43.564 [2024-07-15 18:52:00.251437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-07-15 18:52:00.251447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.564 qpair failed and we were unable to recover it. 00:26:43.564 [2024-07-15 18:52:00.251576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-07-15 18:52:00.251587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.564 qpair failed and we were unable to recover it. 00:26:43.564 [2024-07-15 18:52:00.251777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.564 [2024-07-15 18:52:00.251787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.564 qpair failed and we were unable to recover it. 
00:26:43.849 [2024-07-15 18:52:00.251887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.251897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.252004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.252016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.252148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.252158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.252285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.252295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.252397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.252406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 
00:26:43.849 [2024-07-15 18:52:00.252650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.252660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.252762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.252773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.252883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.252893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.253140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.253150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.253333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.253343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 
00:26:43.849 [2024-07-15 18:52:00.253530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.253541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.253754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.253783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.254066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.254100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.254256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.254293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.254419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.254448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 
00:26:43.849 [2024-07-15 18:52:00.254659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.254688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.254847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.254887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.255057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.255067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.255178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.255188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.255362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.255372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 
00:26:43.849 [2024-07-15 18:52:00.255483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.255494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.255578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.255587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.255697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.255706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.255809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.255818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.255964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.255974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 
00:26:43.849 [2024-07-15 18:52:00.256075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.256084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.256331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.256341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.256450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.256460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.256642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.256652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.256778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.256788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 
00:26:43.849 [2024-07-15 18:52:00.256881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.256890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.257055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.257065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.257246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.257256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.257357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.257366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.257627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.257656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 
00:26:43.849 [2024-07-15 18:52:00.257935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.257965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.258175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.258204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.258424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.258455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.258661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.849 [2024-07-15 18:52:00.258691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.849 qpair failed and we were unable to recover it. 00:26:43.849 [2024-07-15 18:52:00.258842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.258870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 
00:26:43.850 [2024-07-15 18:52:00.259208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.259287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.259492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.259507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.259640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.259654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.259898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.259928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.260081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.260111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 
00:26:43.850 [2024-07-15 18:52:00.260312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.260344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.260494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.260524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.260794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.260824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.260976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.261006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.261218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.261256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 
00:26:43.850 [2024-07-15 18:52:00.261401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.261415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.261515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.261529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.261648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.261662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.261829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.261847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.262021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.262034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 
00:26:43.850 [2024-07-15 18:52:00.262222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.262260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.262467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.262496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.262664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.262694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.262853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.262882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.263021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.263051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 
00:26:43.850 [2024-07-15 18:52:00.263208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.263247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.263481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.263494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.263728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.263742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.263847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.263860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.263983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.263997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 
00:26:43.850 [2024-07-15 18:52:00.264166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.264180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.264416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.264430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.264636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.264666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.264825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.264855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.265019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.265049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 
00:26:43.850 [2024-07-15 18:52:00.265182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.265196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.265427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.265458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.265756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.265786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.265943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.265973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.266184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.266197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 
00:26:43.850 [2024-07-15 18:52:00.266404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.266418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.266661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.266690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.266911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.266941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.267144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.850 [2024-07-15 18:52:00.267174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.850 qpair failed and we were unable to recover it. 00:26:43.850 [2024-07-15 18:52:00.267397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.851 [2024-07-15 18:52:00.267427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.851 qpair failed and we were unable to recover it. 
00:26:43.851 Read completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Read completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Read completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Read completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Read completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Read completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Read completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Read completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Read completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Read completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Read completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Read completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Write completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Read completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Write completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Read completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Read completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Read completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Read completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Read completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Write completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Read completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Read completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Read completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Write completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Write completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Read completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Read completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Read completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Read completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Write completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 Read completed with error (sct=0, sc=8)
00:26:43.851 starting I/O failed
00:26:43.851 [2024-07-15 18:52:00.268055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:43.851 [2024-07-15 18:52:00.268285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.851 [2024-07-15 18:52:00.268320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.851 qpair failed and we were unable to recover it.
00:26:43.851 [2024-07-15 18:52:00.268429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.851 [2024-07-15 18:52:00.268441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.851 qpair failed and we were unable to recover it.
00:26:43.851 [2024-07-15 18:52:00.268667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.851 [2024-07-15 18:52:00.268678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.851 qpair failed and we were unable to recover it.
00:26:43.851 [2024-07-15 18:52:00.268836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.851 [2024-07-15 18:52:00.268846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.851 qpair failed and we were unable to recover it.
00:26:43.851 [2024-07-15 18:52:00.268938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.851 [2024-07-15 18:52:00.268947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.851 qpair failed and we were unable to recover it.
00:26:43.851 [2024-07-15 18:52:00.269145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.851 [2024-07-15 18:52:00.269155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.851 qpair failed and we were unable to recover it.
00:26:43.851 [2024-07-15 18:52:00.269271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.851 [2024-07-15 18:52:00.269281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.851 qpair failed and we were unable to recover it.
00:26:43.851 [2024-07-15 18:52:00.269447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.851 [2024-07-15 18:52:00.269457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.851 qpair failed and we were unable to recover it.
00:26:43.851 [2024-07-15 18:52:00.269614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.851 [2024-07-15 18:52:00.269640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.851 qpair failed and we were unable to recover it.
00:26:43.851 [2024-07-15 18:52:00.269862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.851 [2024-07-15 18:52:00.269892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.851 qpair failed and we were unable to recover it.
00:26:43.851 [2024-07-15 18:52:00.270048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.851 [2024-07-15 18:52:00.270077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.851 qpair failed and we were unable to recover it.
00:26:43.851 [2024-07-15 18:52:00.270281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.851 [2024-07-15 18:52:00.270291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.851 qpair failed and we were unable to recover it.
00:26:43.851 [2024-07-15 18:52:00.270461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.851 [2024-07-15 18:52:00.270471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.851 qpair failed and we were unable to recover it.
00:26:43.851 [2024-07-15 18:52:00.270577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.851 [2024-07-15 18:52:00.270587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.851 qpair failed and we were unable to recover it.
00:26:43.851 [2024-07-15 18:52:00.270790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.851 [2024-07-15 18:52:00.270800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.851 qpair failed and we were unable to recover it.
00:26:43.851 [2024-07-15 18:52:00.271055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.851 [2024-07-15 18:52:00.271065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.851 qpair failed and we were unable to recover it.
00:26:43.851 [2024-07-15 18:52:00.271196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.851 [2024-07-15 18:52:00.271206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.851 qpair failed and we were unable to recover it.
00:26:43.851 [2024-07-15 18:52:00.271379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.851 [2024-07-15 18:52:00.271389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.851 qpair failed and we were unable to recover it.
00:26:43.851 [2024-07-15 18:52:00.271611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.851 [2024-07-15 18:52:00.271621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.851 qpair failed and we were unable to recover it.
00:26:43.851 [2024-07-15 18:52:00.271783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.851 [2024-07-15 18:52:00.271793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.851 qpair failed and we were unable to recover it.
00:26:43.851 [2024-07-15 18:52:00.271923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.851 [2024-07-15 18:52:00.271933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.851 qpair failed and we were unable to recover it.
00:26:43.851 [2024-07-15 18:52:00.272086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.851 [2024-07-15 18:52:00.272116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.851 qpair failed and we were unable to recover it.
00:26:43.851 [2024-07-15 18:52:00.272341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.851 [2024-07-15 18:52:00.272386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.851 qpair failed and we were unable to recover it.
00:26:43.851 [2024-07-15 18:52:00.272682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.851 [2024-07-15 18:52:00.272712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.851 qpair failed and we were unable to recover it.
00:26:43.851 [2024-07-15 18:52:00.272866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.851 [2024-07-15 18:52:00.272896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.851 qpair failed and we were unable to recover it.
00:26:43.851 [2024-07-15 18:52:00.273132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.851 [2024-07-15 18:52:00.273161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.851 qpair failed and we were unable to recover it.
00:26:43.851 [2024-07-15 18:52:00.273382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.851 [2024-07-15 18:52:00.273392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.851 qpair failed and we were unable to recover it.
00:26:43.851 [2024-07-15 18:52:00.273496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.851 [2024-07-15 18:52:00.273508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.851 qpair failed and we were unable to recover it.
00:26:43.851 [2024-07-15 18:52:00.273594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.273603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.273766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.273776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.273952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.273962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.274195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.274205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.274385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.274395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.274579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.274614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.274906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.274935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.275092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.275122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.275361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.275371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.275566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.275575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.275700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.275710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.275871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.275882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.276082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.276092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.276202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.276212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.276389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.276400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.276511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.276522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.276638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.276649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.276845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.276855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.276974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.276984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.277160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.277170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.277252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.277262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.277370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.277379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.277505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.277515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.277616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.277625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.277739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.277750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.277913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.277923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.278092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.278102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.278306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.278316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.278554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.278584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.278800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.278830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.279069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.279098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.279262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.279293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.279661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.279711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.279931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.279962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.280191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.280221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.280402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.280435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.280679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.280709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.280848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.280887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.281186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.281217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.281480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.281511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.852 qpair failed and we were unable to recover it.
00:26:43.852 [2024-07-15 18:52:00.281724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.852 [2024-07-15 18:52:00.281755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.281894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.281924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.282096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.282125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.282418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.282450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.282657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.282687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.282962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.283000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.283266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.283298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.283517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.283546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.283760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.283790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.284026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.284057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.284280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.284311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.284472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.284486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.284615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.284629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.284817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.284830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.285062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.285076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.285247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.285261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.285508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.285537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.285695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.285724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.285878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.285907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.286111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.286125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.286301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.286316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.286514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.286544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.286700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.286729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.286898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.286928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.287073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.287087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.287358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.287389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.287661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.287690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.287898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.287927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.288085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.288099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.288277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.288291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.288420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.288434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.288657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.288687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.288898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.288928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.853 qpair failed and we were unable to recover it.
00:26:43.853 [2024-07-15 18:52:00.289095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.853 [2024-07-15 18:52:00.289125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.854 qpair failed and we were unable to recover it.
00:26:43.854 [2024-07-15 18:52:00.289305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.854 [2024-07-15 18:52:00.289320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.854 qpair failed and we were unable to recover it.
00:26:43.854 [2024-07-15 18:52:00.289435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.854 [2024-07-15 18:52:00.289449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.854 qpair failed and we were unable to recover it.
00:26:43.854 [2024-07-15 18:52:00.289638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.854 [2024-07-15 18:52:00.289652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.854 qpair failed and we were unable to recover it.
00:26:43.854 [2024-07-15 18:52:00.289849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.854 [2024-07-15 18:52:00.289863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.854 qpair failed and we were unable to recover it.
00:26:43.854 [2024-07-15 18:52:00.290049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.854 [2024-07-15 18:52:00.290063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.854 qpair failed and we were unable to recover it.
00:26:43.854 [2024-07-15 18:52:00.290174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.854 [2024-07-15 18:52:00.290188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:43.854 qpair failed and we were unable to recover it.
00:26:43.854 [2024-07-15 18:52:00.290397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.290411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 00:26:43.854 [2024-07-15 18:52:00.290604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.290618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 00:26:43.854 [2024-07-15 18:52:00.290738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.290752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 00:26:43.854 [2024-07-15 18:52:00.290865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.290878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 00:26:43.854 [2024-07-15 18:52:00.291054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.291068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 
00:26:43.854 [2024-07-15 18:52:00.291189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.291209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 00:26:43.854 [2024-07-15 18:52:00.291406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.291421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 00:26:43.854 [2024-07-15 18:52:00.291594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.291607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 00:26:43.854 [2024-07-15 18:52:00.291746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.291760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 00:26:43.854 [2024-07-15 18:52:00.291885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.291899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 
00:26:43.854 [2024-07-15 18:52:00.292090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.292104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 00:26:43.854 [2024-07-15 18:52:00.292294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.292308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 00:26:43.854 [2024-07-15 18:52:00.292484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.292498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 00:26:43.854 [2024-07-15 18:52:00.292625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.292639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 00:26:43.854 [2024-07-15 18:52:00.292875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.292889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 
00:26:43.854 [2024-07-15 18:52:00.293093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.293107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 00:26:43.854 [2024-07-15 18:52:00.293284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.293299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 00:26:43.854 [2024-07-15 18:52:00.293402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.293416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 00:26:43.854 [2024-07-15 18:52:00.293539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.293554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 00:26:43.854 [2024-07-15 18:52:00.293762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.293776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 
00:26:43.854 [2024-07-15 18:52:00.293954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.293968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 00:26:43.854 [2024-07-15 18:52:00.294100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.294113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 00:26:43.854 [2024-07-15 18:52:00.294234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.294248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 00:26:43.854 [2024-07-15 18:52:00.294433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.294447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 00:26:43.854 [2024-07-15 18:52:00.294562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.294577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 
00:26:43.854 [2024-07-15 18:52:00.294738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.294752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 00:26:43.854 [2024-07-15 18:52:00.294988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.295002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 00:26:43.854 [2024-07-15 18:52:00.295120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.295134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 00:26:43.854 [2024-07-15 18:52:00.295258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.295273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 00:26:43.854 [2024-07-15 18:52:00.295512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.295525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 
00:26:43.854 [2024-07-15 18:52:00.295652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.295667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 00:26:43.854 [2024-07-15 18:52:00.295782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.295796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 00:26:43.854 [2024-07-15 18:52:00.295994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.296027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 00:26:43.854 [2024-07-15 18:52:00.296254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.854 [2024-07-15 18:52:00.296281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.854 qpair failed and we were unable to recover it. 00:26:43.854 [2024-07-15 18:52:00.296473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.296485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 
00:26:43.855 [2024-07-15 18:52:00.296686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.296717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.296925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.296956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.297159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.297189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.297316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.297326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.297444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.297454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 
00:26:43.855 [2024-07-15 18:52:00.297641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.297651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.297862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.297872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.297984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.297994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.298161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.298172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.298275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.298286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 
00:26:43.855 [2024-07-15 18:52:00.298455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.298469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.298672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.298682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.298775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.298784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.298883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.298892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.299071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.299081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 
00:26:43.855 [2024-07-15 18:52:00.299269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.299280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.299460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.299471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.299652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.299681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.299824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.299853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.300152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.300181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 
00:26:43.855 [2024-07-15 18:52:00.300408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.300418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.300550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.300560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.300679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.300689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.300788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.300797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.300911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.300922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 
00:26:43.855 [2024-07-15 18:52:00.301108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.301118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.301221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.301235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.301327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.301336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.301507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.301517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.301672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.301703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 
00:26:43.855 [2024-07-15 18:52:00.301847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.301876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.302010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.302039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.302182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.302212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.302438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.302448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.302679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.302689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 
00:26:43.855 [2024-07-15 18:52:00.302873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.302903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.303050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.303078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.303245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.303285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.303447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.303461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 00:26:43.855 [2024-07-15 18:52:00.303669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.855 [2024-07-15 18:52:00.303699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.855 qpair failed and we were unable to recover it. 
00:26:43.856 [2024-07-15 18:52:00.303906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.303937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.304097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.304127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.304349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.304363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.304472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.304486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.304601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.304616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.304802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.304816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.304928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.304943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.305049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.305064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.305235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.305250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.305440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.305454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.305559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.305583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.305703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.305717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.305893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.305906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.306079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.306092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.306216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.306234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.306355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.306369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.306487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.306502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.306609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.306622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.306734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.306747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.306935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.306949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.307057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.307070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.307261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.307275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.307394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.307408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.307592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.307606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.307783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.307797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.308089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.308119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.308341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.308372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.308578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.308592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.308714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.308747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.308900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.308930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.309145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.309175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.309392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.309406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.309512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.309525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.309636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.309649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.309855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.309868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.310069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.310082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.310316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.310330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.310454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.310468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.310683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.310716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.856 [2024-07-15 18:52:00.310941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.856 [2024-07-15 18:52:00.310970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.856 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.311200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.311214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.311457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.311472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.311644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.311657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.311842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.311856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.311990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.312020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.312155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.312185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.312414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.312459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.312564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.312577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.312708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.312722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.312827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.312841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.313018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.313035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.313157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.313171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.313351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.313365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.313536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.313550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.313735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.313764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.313964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.313993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.314203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.314240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.314388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.314417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.314654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.314669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.314869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.314883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.315054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.315070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.315246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.315278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.315440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.315469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.315626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.315656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.315798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.315828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.316051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.316081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.316304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.316318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.316421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.316435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.316545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.316559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.316684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.316698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.316872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.316886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.317065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.317095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.317306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.317337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.317487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.317516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.317743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.857 [2024-07-15 18:52:00.317773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.857 qpair failed and we were unable to recover it.
00:26:43.857 [2024-07-15 18:52:00.318050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.318079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.318370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.318400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.318628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.318663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.318827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.318857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.319130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.319159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.319309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.319340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.319541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.319570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.319722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.319752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.319965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.319994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.320213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.320253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.320472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.320485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.320667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.320681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.320793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.320807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.320922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.320935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.321112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.321126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.321254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.321269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.321441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.321455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.321637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.321650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.321754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.321767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.321896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.321909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.322077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.322091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.322272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.322286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.322404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.322418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.322520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.322534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.322662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.322675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.322776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.322790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.322978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.322993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.323230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.323244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.323361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.323375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.323546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.323560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.323740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.323756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.323950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.323963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.324164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.324178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.324306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.324321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.324488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.324501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.324606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.324620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.324726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.324740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.324926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.324956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.325181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.325210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.325447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.858 [2024-07-15 18:52:00.325486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.858 qpair failed and we were unable to recover it.
00:26:43.858 [2024-07-15 18:52:00.325675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.858 [2024-07-15 18:52:00.325689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.858 qpair failed and we were unable to recover it. 00:26:43.858 [2024-07-15 18:52:00.325791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.325805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.325994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.326028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.326247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.326279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.326491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.326520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 
00:26:43.859 [2024-07-15 18:52:00.326627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.326657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.326869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.326897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.327111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.327140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.327274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.327288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.327412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.327427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 
00:26:43.859 [2024-07-15 18:52:00.327546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.327559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.327736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.327751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.327866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.327879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.328046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.328060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.328236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.328250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 
00:26:43.859 [2024-07-15 18:52:00.328355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.328368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.328447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.328460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.328573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.328587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.328696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.328710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.328977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.328992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 
00:26:43.859 [2024-07-15 18:52:00.329114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.329128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.329235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.329249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.329337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.329349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.329514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.329528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.329631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.329644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 
00:26:43.859 [2024-07-15 18:52:00.329761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.329775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.329956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.329969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.330143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.330157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.330332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.330346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.330438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.330451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 
00:26:43.859 [2024-07-15 18:52:00.330577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.330592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.330806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.330836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.331054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.331084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.331356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.331387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.331536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.331565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 
00:26:43.859 [2024-07-15 18:52:00.331775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.331805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.331955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.331984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.332273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.332287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.332392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.332405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.332573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.332587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 
00:26:43.859 [2024-07-15 18:52:00.332714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.332728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.859 [2024-07-15 18:52:00.332840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.859 [2024-07-15 18:52:00.332854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.859 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.332968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.332984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.333089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.333103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.333208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.333222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 
00:26:43.860 [2024-07-15 18:52:00.333394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.333407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.333585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.333599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.333712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.333726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.333853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.333866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.333971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.333986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 
00:26:43.860 [2024-07-15 18:52:00.334100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.334115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.334375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.334390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.334557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.334571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.334691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.334706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.334819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.334832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 
00:26:43.860 [2024-07-15 18:52:00.335003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.335017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.335192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.335205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.335314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.335328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.335456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.335469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.335583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.335597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 
00:26:43.860 [2024-07-15 18:52:00.335766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.335780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.336029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.336059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.336205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.336263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.336411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.336441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.336635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.336649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 
00:26:43.860 [2024-07-15 18:52:00.336835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.336849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.337023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.337037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.337137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.337150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.337264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.337280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.337451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.337465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 
00:26:43.860 [2024-07-15 18:52:00.337577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.337591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.337702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.337716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.337831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.337844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.337960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.337974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.338082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.338095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 
00:26:43.860 [2024-07-15 18:52:00.338270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.338284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.338468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.338482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.338650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.338664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.338839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.338853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 00:26:43.860 [2024-07-15 18:52:00.338994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.860 [2024-07-15 18:52:00.339024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.860 qpair failed and we were unable to recover it. 
00:26:43.863 [2024-07-15 18:52:00.359219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.863 [2024-07-15 18:52:00.359237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.863 qpair failed and we were unable to recover it. 00:26:43.863 [2024-07-15 18:52:00.359411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.863 [2024-07-15 18:52:00.359424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.863 qpair failed and we were unable to recover it. 00:26:43.863 [2024-07-15 18:52:00.359530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.863 [2024-07-15 18:52:00.359544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.863 qpair failed and we were unable to recover it. 00:26:43.863 [2024-07-15 18:52:00.359724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.863 [2024-07-15 18:52:00.359738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.863 qpair failed and we were unable to recover it. 00:26:43.863 [2024-07-15 18:52:00.359936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.863 [2024-07-15 18:52:00.359950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.863 qpair failed and we were unable to recover it. 
00:26:43.863 [2024-07-15 18:52:00.360136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.863 [2024-07-15 18:52:00.360166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.863 qpair failed and we were unable to recover it. 00:26:43.863 [2024-07-15 18:52:00.360313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.863 [2024-07-15 18:52:00.360344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.863 qpair failed and we were unable to recover it. 00:26:43.863 [2024-07-15 18:52:00.360518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.863 [2024-07-15 18:52:00.360547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.863 qpair failed and we were unable to recover it. 00:26:43.863 [2024-07-15 18:52:00.360705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.863 [2024-07-15 18:52:00.360734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.863 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.360949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.360979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 
00:26:43.864 [2024-07-15 18:52:00.361127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.361156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.361365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.361395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.361661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.361677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.361797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.361810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.361920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.361934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 
00:26:43.864 [2024-07-15 18:52:00.362121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.362135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.362358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.362373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.362594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.362623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.362840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.362870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.363092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.363122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 
00:26:43.864 [2024-07-15 18:52:00.363405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.363436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.363578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.363607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.363773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.363803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.364072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.364102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.364311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.364343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 
00:26:43.864 [2024-07-15 18:52:00.364598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.364612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.364724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.364737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.364889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.364902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.365085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.365098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.365351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.365365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 
00:26:43.864 [2024-07-15 18:52:00.365618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.365647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.365944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.365973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.366190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.366220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.366437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.366467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.366733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.366746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 
00:26:43.864 [2024-07-15 18:52:00.367001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.367015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.367186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.367199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.367395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.367426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.367582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.367611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.367800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.367869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 
00:26:43.864 [2024-07-15 18:52:00.368032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.368066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.368253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.368286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.368512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.368542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.368762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.368792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.368998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.369027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 
00:26:43.864 [2024-07-15 18:52:00.369245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.369259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.369454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.369468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.369603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.369617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.369732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.369745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.864 qpair failed and we were unable to recover it. 00:26:43.864 [2024-07-15 18:52:00.369930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.864 [2024-07-15 18:52:00.369943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 
00:26:43.865 [2024-07-15 18:52:00.370069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 18:52:00.370083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 00:26:43.865 [2024-07-15 18:52:00.370230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 18:52:00.370244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 00:26:43.865 [2024-07-15 18:52:00.370355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 18:52:00.370369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 00:26:43.865 [2024-07-15 18:52:00.370562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 18:52:00.370576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 00:26:43.865 [2024-07-15 18:52:00.370696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 18:52:00.370709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 
00:26:43.865 [2024-07-15 18:52:00.370894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 18:52:00.370908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 00:26:43.865 [2024-07-15 18:52:00.371077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 18:52:00.371090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 00:26:43.865 [2024-07-15 18:52:00.371263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 18:52:00.371278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 00:26:43.865 [2024-07-15 18:52:00.371407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 18:52:00.371420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 00:26:43.865 [2024-07-15 18:52:00.371601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 18:52:00.371614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 
00:26:43.865 [2024-07-15 18:52:00.371738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 18:52:00.371753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 00:26:43.865 [2024-07-15 18:52:00.371865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 18:52:00.371879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 00:26:43.865 [2024-07-15 18:52:00.371988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 18:52:00.372002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 00:26:43.865 [2024-07-15 18:52:00.372092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 18:52:00.372106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 00:26:43.865 [2024-07-15 18:52:00.372236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 18:52:00.372250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 
00:26:43.865 [2024-07-15 18:52:00.372442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 18:52:00.372456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 00:26:43.865 [2024-07-15 18:52:00.372719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 18:52:00.372735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 00:26:43.865 [2024-07-15 18:52:00.372921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 18:52:00.372935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 00:26:43.865 [2024-07-15 18:52:00.373060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 18:52:00.373074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 00:26:43.865 [2024-07-15 18:52:00.373207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 18:52:00.373221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 
00:26:43.865 [2024-07-15 18:52:00.373360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 18:52:00.373374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 00:26:43.865 [2024-07-15 18:52:00.373556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 18:52:00.373571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 00:26:43.865 [2024-07-15 18:52:00.373689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 18:52:00.373703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 00:26:43.865 [2024-07-15 18:52:00.373822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 18:52:00.373836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 00:26:43.865 [2024-07-15 18:52:00.374006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 18:52:00.374019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 
00:26:43.865 [2024-07-15 18:52:00.374286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.865 [2024-07-15 18:52:00.374300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.865 qpair failed and we were unable to recover it.
00:26:43.865 [2024-07-15 18:52:00.374423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.865 [2024-07-15 18:52:00.374437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.865 qpair failed and we were unable to recover it.
00:26:43.865 [2024-07-15 18:52:00.374609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.865 [2024-07-15 18:52:00.374622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.865 qpair failed and we were unable to recover it.
00:26:43.865 [2024-07-15 18:52:00.374790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.865 [2024-07-15 18:52:00.374805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.865 qpair failed and we were unable to recover it.
00:26:43.865 [2024-07-15 18:52:00.374987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.865 [2024-07-15 18:52:00.375001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.865 qpair failed and we were unable to recover it.
00:26:43.865 [2024-07-15 18:52:00.375110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.865 [2024-07-15 18:52:00.375124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.865 qpair failed and we were unable to recover it.
00:26:43.865 [2024-07-15 18:52:00.375310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.865 [2024-07-15 18:52:00.375324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.865 qpair failed and we were unable to recover it.
00:26:43.865 [2024-07-15 18:52:00.375518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.865 [2024-07-15 18:52:00.375548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.865 qpair failed and we were unable to recover it.
00:26:43.865 [2024-07-15 18:52:00.375766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.865 [2024-07-15 18:52:00.375795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.865 qpair failed and we were unable to recover it.
00:26:43.865 [2024-07-15 18:52:00.376004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.865 [2024-07-15 18:52:00.376034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.865 qpair failed and we were unable to recover it.
00:26:43.865 [2024-07-15 18:52:00.376172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.865 [2024-07-15 18:52:00.376201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.865 qpair failed and we were unable to recover it.
00:26:43.865 [2024-07-15 18:52:00.376359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.865 [2024-07-15 18:52:00.376390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.865 qpair failed and we were unable to recover it.
00:26:43.865 [2024-07-15 18:52:00.376528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.865 [2024-07-15 18:52:00.376559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.865 qpair failed and we were unable to recover it.
00:26:43.865 [2024-07-15 18:52:00.376851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.865 [2024-07-15 18:52:00.376865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.865 qpair failed and we were unable to recover it.
00:26:43.865 [2024-07-15 18:52:00.376981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.865 [2024-07-15 18:52:00.376994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.865 qpair failed and we were unable to recover it.
00:26:43.865 [2024-07-15 18:52:00.377113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.865 [2024-07-15 18:52:00.377127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.865 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.377260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.377275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.377465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.377478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.377593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.377613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.377739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.377752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.377921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.377934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.378106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.378120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.378247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.378261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.378460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.378474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.378659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.378674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.378795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.378810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.378962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.378992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.379143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.379173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.379342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.379373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.379510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.379524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.379729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.379759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.379918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.379948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.380145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.380213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.380428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.380444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.380626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.380657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.380811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.380842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.380992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.381022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.381207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.381222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.381319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.381332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.381508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.381522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.381622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.381636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.381746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.381759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.381948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.381978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.382195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.382237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.382379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.382410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.382632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.382650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.382759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.382773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.382995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.383009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.383114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.383128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.383241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.383256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.383366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.383379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.866 [2024-07-15 18:52:00.383567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.866 [2024-07-15 18:52:00.383580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.866 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.383764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.383777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.384011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.384025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.384152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.384165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.384288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.384302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.384422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.384436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.384679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.384709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.384857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.384887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.385194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.385243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.385460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.385490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.385643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.385672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.385878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.385908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.386042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.386072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.386240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.386270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.386418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.386432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.386639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.386669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.386873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.386903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.387104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.387133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.387341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.387355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.387486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.387500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.387741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.387771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.388054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.388091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.388251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.388282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.388500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.388514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.388697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.388711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.388912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.388942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.389085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.389116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.389273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.389304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.389516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.389546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.389815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.389844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.390001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.390031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.390178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.390208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.390375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.390406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.390620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.390650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.390919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.390948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.391169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.391199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.391427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.391458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.391669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.391683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.391918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.391932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.392056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.392069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.392188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.392201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.867 qpair failed and we were unable to recover it.
00:26:43.867 [2024-07-15 18:52:00.392453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.867 [2024-07-15 18:52:00.392467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.868 qpair failed and we were unable to recover it.
00:26:43.868 [2024-07-15 18:52:00.392574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.868 [2024-07-15 18:52:00.392588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.868 qpair failed and we were unable to recover it.
00:26:43.868 [2024-07-15 18:52:00.392768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.868 [2024-07-15 18:52:00.392782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.868 qpair failed and we were unable to recover it.
00:26:43.868 [2024-07-15 18:52:00.392908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.868 [2024-07-15 18:52:00.392922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.868 qpair failed and we were unable to recover it.
00:26:43.868 [2024-07-15 18:52:00.393030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.868 [2024-07-15 18:52:00.393044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.868 qpair failed and we were unable to recover it.
00:26:43.868 [2024-07-15 18:52:00.393218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.868 [2024-07-15 18:52:00.393257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.868 qpair failed and we were unable to recover it.
00:26:43.868 [2024-07-15 18:52:00.393447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.868 [2024-07-15 18:52:00.393461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.868 qpair failed and we were unable to recover it.
00:26:43.868 [2024-07-15 18:52:00.393565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.868 [2024-07-15 18:52:00.393581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.868 qpair failed and we were unable to recover it.
00:26:43.868 [2024-07-15 18:52:00.393769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.868 [2024-07-15 18:52:00.393782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.868 qpair failed and we were unable to recover it.
00:26:43.868 [2024-07-15 18:52:00.393963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.868 [2024-07-15 18:52:00.393977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.868 qpair failed and we were unable to recover it.
00:26:43.868 [2024-07-15 18:52:00.394177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.868 [2024-07-15 18:52:00.394191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.868 qpair failed and we were unable to recover it.
00:26:43.868 [2024-07-15 18:52:00.394304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.868 [2024-07-15 18:52:00.394318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.868 qpair failed and we were unable to recover it.
00:26:43.868 [2024-07-15 18:52:00.394444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.868 [2024-07-15 18:52:00.394458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.868 qpair failed and we were unable to recover it.
00:26:43.868 [2024-07-15 18:52:00.394651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.868 [2024-07-15 18:52:00.394665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.868 qpair failed and we were unable to recover it.
00:26:43.868 [2024-07-15 18:52:00.394834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.868 [2024-07-15 18:52:00.394848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.868 qpair failed and we were unable to recover it.
00:26:43.868 [2024-07-15 18:52:00.395048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.868 [2024-07-15 18:52:00.395061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.868 qpair failed and we were unable to recover it.
00:26:43.868 [2024-07-15 18:52:00.395204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.868 [2024-07-15 18:52:00.395217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.868 qpair failed and we were unable to recover it.
00:26:43.868 [2024-07-15 18:52:00.395330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.868 [2024-07-15 18:52:00.395344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.868 qpair failed and we were unable to recover it.
00:26:43.868 [2024-07-15 18:52:00.395530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.868 [2024-07-15 18:52:00.395543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.868 qpair failed and we were unable to recover it.
00:26:43.868 [2024-07-15 18:52:00.395807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.868 [2024-07-15 18:52:00.395821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.868 qpair failed and we were unable to recover it.
00:26:43.868 [2024-07-15 18:52:00.395948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.868 [2024-07-15 18:52:00.395961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.868 qpair failed and we were unable to recover it.
00:26:43.868 [2024-07-15 18:52:00.396078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.868 [2024-07-15 18:52:00.396092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.868 qpair failed and we were unable to recover it.
00:26:43.868 [2024-07-15 18:52:00.396273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.868 [2024-07-15 18:52:00.396288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.868 qpair failed and we were unable to recover it.
00:26:43.868 [2024-07-15 18:52:00.396399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 18:52:00.396413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 18:52:00.396584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 18:52:00.396598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 18:52:00.396826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 18:52:00.396856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 18:52:00.397016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 18:52:00.397045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 18:52:00.397262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 18:52:00.397292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 
00:26:43.868 [2024-07-15 18:52:00.397512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 18:52:00.397525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 18:52:00.397701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 18:52:00.397715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 18:52:00.397899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 18:52:00.397928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 18:52:00.398134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 18:52:00.398164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 18:52:00.398369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 18:52:00.398400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 
00:26:43.868 [2024-07-15 18:52:00.398639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 18:52:00.398654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 18:52:00.398832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 18:52:00.398845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 18:52:00.398975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 18:52:00.398989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 18:52:00.399231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 18:52:00.399245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 18:52:00.399417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 18:52:00.399431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 
00:26:43.868 [2024-07-15 18:52:00.399538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 18:52:00.399551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 18:52:00.399728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 18:52:00.399741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 18:52:00.399983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 18:52:00.400013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 18:52:00.400171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 18:52:00.400201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 18:52:00.400510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 18:52:00.400540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 
00:26:43.869 [2024-07-15 18:52:00.400680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 18:52:00.400710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 18:52:00.400950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 18:52:00.400980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 18:52:00.401122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 18:52:00.401151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 18:52:00.401360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 18:52:00.401391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 18:52:00.401537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 18:52:00.401567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 
00:26:43.869 [2024-07-15 18:52:00.401788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.401805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.401911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.401925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.402096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.402110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.402309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.402322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.402507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.402520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.402634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.402647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.402840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.402854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.403027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.403040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.403246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.403277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.403422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.403452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.403599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.403629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.403784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.403814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.404019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.404049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.404217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.404264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.404500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.404530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.404817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.404847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.404995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.405025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.405237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.405269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.405491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.405520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.405665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.405678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.405918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.405949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.406090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.406119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.406326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.406358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.406560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.406574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.406708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.406721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.406845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.406858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.406974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.406987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.407167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.407181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.407288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.407305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.407399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.407415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.407535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.407549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.407671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.407684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.407881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.407895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.408015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.408029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.869 qpair failed and we were unable to recover it.
00:26:43.869 [2024-07-15 18:52:00.408233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.869 [2024-07-15 18:52:00.408247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.408421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.408435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.408551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.408564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.408669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.408683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.408915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.408928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.409036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.409050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.409158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.409173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.409377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.409392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.409491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.409504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.409612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.409625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.409727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.409740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.409979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.409993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.410171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.410185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.410317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.410330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.410466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.410480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.410596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.410610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.410851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.410865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.410986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.411000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.411112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.411126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.411261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.411275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.411451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.411465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.411707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.411720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.411827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.411840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.411945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.411958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.412135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.412149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.412266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.412280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.412388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.412402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.412509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.412523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.412641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.412654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.412857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.412871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.413039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.413053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.413162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.413176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.413310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.413324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.413557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.413573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.413758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.413772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.413886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 18:52:00.413899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
00:26:43.870 [2024-07-15 18:52:00.414029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.414042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.414150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.414164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.414282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.414295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.414414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.414428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.414559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.414573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 
00:26:43.871 [2024-07-15 18:52:00.414698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.414711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.414815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.414830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.415034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.415048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.415163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.415176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.415279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.415293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 
00:26:43.871 [2024-07-15 18:52:00.415423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.415436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.415610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.415623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.415749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.415763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.415956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.415969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.416076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.416089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 
00:26:43.871 [2024-07-15 18:52:00.416198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.416211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.416348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.416375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.416476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.416488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.416661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.416671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.416849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.416859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 
00:26:43.871 [2024-07-15 18:52:00.416952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.416962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.417066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.417076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.417188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.417198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.417319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.417329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.417440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.417453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 
00:26:43.871 [2024-07-15 18:52:00.417559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.417569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.417684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.417694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.417790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.417800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.417952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.417962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.418068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.418078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 
00:26:43.871 [2024-07-15 18:52:00.418280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.418290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.418392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.418402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.418517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.418527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.418636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.418646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.418765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.418775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 
00:26:43.871 [2024-07-15 18:52:00.418934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.418944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.419145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.419155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.419291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.419300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.419412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.419423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.419538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.419549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 
00:26:43.871 [2024-07-15 18:52:00.419650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.419659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.419766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.419776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.419870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.871 [2024-07-15 18:52:00.419881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.871 qpair failed and we were unable to recover it. 00:26:43.871 [2024-07-15 18:52:00.419978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.419988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 18:52:00.420168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.420178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 
00:26:43.872 [2024-07-15 18:52:00.420375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.420386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 18:52:00.420514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.420524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 18:52:00.420641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.420651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 18:52:00.420741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.420753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 18:52:00.420831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.420840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 
00:26:43.872 [2024-07-15 18:52:00.420933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.420942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 18:52:00.421065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.421075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 18:52:00.421245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.421256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 18:52:00.421419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.421428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 18:52:00.421534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.421544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 
00:26:43.872 [2024-07-15 18:52:00.421646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.421656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 18:52:00.421749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.421760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 18:52:00.421860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.421870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 18:52:00.422033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.422042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 18:52:00.422158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.422169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 
00:26:43.872 [2024-07-15 18:52:00.422272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.422283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 18:52:00.422377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.422387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 18:52:00.422555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.422565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 18:52:00.422672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.422682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 18:52:00.422852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.422864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 
00:26:43.872 [2024-07-15 18:52:00.422998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.423008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 18:52:00.423095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.423104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 18:52:00.423214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.423227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 18:52:00.423323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.423332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 18:52:00.423434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.423444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 
00:26:43.872 [2024-07-15 18:52:00.423577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.423587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 18:52:00.423753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.423763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 18:52:00.423824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.423834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 18:52:00.423929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.423938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 18:52:00.424039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 18:52:00.424048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 
00:26:43.872 [2024-07-15 18:52:00.424209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.872 [2024-07-15 18:52:00.424219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.872 qpair failed and we were unable to recover it.
[the same connect() failure (errno = 111) and qpair recovery error for tqpair=0x7ff7c0000b90, addr=10.0.0.2, port=4420 repeats continuously through 2024-07-15 18:52:00.440273; only the timestamps change]
00:26:43.875 [2024-07-15 18:52:00.440373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.440383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 18:52:00.440568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.440579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 18:52:00.440677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.440686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 18:52:00.440769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.440778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 18:52:00.440883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.440893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 
00:26:43.875 [2024-07-15 18:52:00.441013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.441022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 18:52:00.441140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.441149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 18:52:00.441264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.441274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 18:52:00.441398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.441408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 18:52:00.441579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.441589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 
00:26:43.875 [2024-07-15 18:52:00.441757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.441767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 18:52:00.441863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.441873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 18:52:00.442039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.442048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 18:52:00.442131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.442141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 18:52:00.442315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.442325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 
00:26:43.875 [2024-07-15 18:52:00.442447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.442456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 18:52:00.442605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.442614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 18:52:00.442711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.442721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 18:52:00.442828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.442838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 18:52:00.443009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.443018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 
00:26:43.875 [2024-07-15 18:52:00.443167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.443177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 18:52:00.443358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.443370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 18:52:00.443474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.443484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 18:52:00.443574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.443583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 18:52:00.443682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.443691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 
00:26:43.875 [2024-07-15 18:52:00.443850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.443859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 18:52:00.443953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.443963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 18:52:00.444167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.444177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 18:52:00.444285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.444296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 18:52:00.444394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 18:52:00.444404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 
00:26:43.875 [2024-07-15 18:52:00.444507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.444517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.444623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.444633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.444794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.444804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.444904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.444914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.445002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.445012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 
00:26:43.876 [2024-07-15 18:52:00.445180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.445190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.445291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.445301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.445448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.445458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.445569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.445578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.445767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.445777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 
00:26:43.876 [2024-07-15 18:52:00.445897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.445907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.446074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.446083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.446252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.446263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.446373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.446383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.446479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.446489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 
00:26:43.876 [2024-07-15 18:52:00.446567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.446577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.446743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.446753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.446852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.446861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.446973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.446982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.447078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.447088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 
00:26:43.876 [2024-07-15 18:52:00.447199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.447209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.447387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.447398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.447497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.447507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.447669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.447678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.447774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.447783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 
00:26:43.876 [2024-07-15 18:52:00.447942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.447953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.448065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.448075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.448251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.448261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.448372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.448382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.448492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.448502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 
00:26:43.876 [2024-07-15 18:52:00.448599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.448609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.448729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.448741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.448848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.448858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.449036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.449045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.449158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.449168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 
00:26:43.876 [2024-07-15 18:52:00.449332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.449342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.449458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.449467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.449557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.449567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.449732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.449742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.449910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.449919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 
00:26:43.876 [2024-07-15 18:52:00.450020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.450030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.450199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.450209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.450305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.450315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.450421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.450430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.450602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.450611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 
00:26:43.876 [2024-07-15 18:52:00.450723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.450733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.450893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.450903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.876 qpair failed and we were unable to recover it. 00:26:43.876 [2024-07-15 18:52:00.451010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.876 [2024-07-15 18:52:00.451019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.451131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.451140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.451300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.451310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 
00:26:43.877 [2024-07-15 18:52:00.451402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.451412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.451558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.451568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.451727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.451736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.451859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.451869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.451968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.451978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 
00:26:43.877 [2024-07-15 18:52:00.452046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.452055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.452162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.452172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.452347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.452358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.452456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.452466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.452588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.452597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 
00:26:43.877 [2024-07-15 18:52:00.452692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.452702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.452839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.452848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.452960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.452969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.453070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.453080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.453175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.453185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 
00:26:43.877 [2024-07-15 18:52:00.453324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.453334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.453496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.453505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.453620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.453630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.453795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.453805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.453907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.453916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 
00:26:43.877 [2024-07-15 18:52:00.454014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.454024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.454140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.454151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.454248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.454258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.454352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.454362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.454453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.454463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 
00:26:43.877 [2024-07-15 18:52:00.454626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.454636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.454756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.454766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.454860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.454870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.454964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.454974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.455076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.455085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 
00:26:43.877 [2024-07-15 18:52:00.455184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.455195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.455309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.455319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.455417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.455427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.455677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.455687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.455796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.455806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 
00:26:43.877 [2024-07-15 18:52:00.455930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.455940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.456056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.456066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.456233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.456243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.456338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.456347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.456516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.456526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 
00:26:43.877 [2024-07-15 18:52:00.456695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.456705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.456809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.456819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.456940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.456949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.457055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.457064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.457229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.457239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 
00:26:43.877 [2024-07-15 18:52:00.457413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.457424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.457546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.457557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.877 [2024-07-15 18:52:00.457656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.877 [2024-07-15 18:52:00.457666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.877 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.457839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.457848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.457990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.458000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 
00:26:43.878 [2024-07-15 18:52:00.458096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.458106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.458216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.458239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.458313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.458322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.458495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.458504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.458677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.458686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 
00:26:43.878 [2024-07-15 18:52:00.458792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.458802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.458901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.458911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.459015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.459025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.459209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.459219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.459321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.459331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 
00:26:43.878 [2024-07-15 18:52:00.459456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.459466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.459668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.459681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.459851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.459860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.459967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.459977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.460092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.460102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 
00:26:43.878 [2024-07-15 18:52:00.460185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.460195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.460359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.460369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.460474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.460483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.460605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.460614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.460858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.460868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 
00:26:43.878 [2024-07-15 18:52:00.461036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.461046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.461175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.461185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.461339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.461349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.461531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.461541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.461714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.461724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 
00:26:43.878 [2024-07-15 18:52:00.461893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.461903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.462010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.462020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.462110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.462120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.462211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.462221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.462323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.462333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 
00:26:43.878 [2024-07-15 18:52:00.462586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.462596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.462691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.462700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.462898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.462908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.463022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.463032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.463209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.463219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 
00:26:43.878 [2024-07-15 18:52:00.463388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.463398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.463494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.463504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.463663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.463673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.463842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.463852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.463963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.463973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 
00:26:43.878 [2024-07-15 18:52:00.464174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.464183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.464362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.464372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.878 [2024-07-15 18:52:00.464466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.878 [2024-07-15 18:52:00.464476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.878 qpair failed and we were unable to recover it. 00:26:43.879 [2024-07-15 18:52:00.464576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.879 [2024-07-15 18:52:00.464587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.879 qpair failed and we were unable to recover it. 00:26:43.879 [2024-07-15 18:52:00.464747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.879 [2024-07-15 18:52:00.464756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.879 qpair failed and we were unable to recover it. 
00:26:43.879 [2024-07-15 18:52:00.464868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.879 [2024-07-15 18:52:00.464878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.879 qpair failed and we were unable to recover it. 00:26:43.879 [2024-07-15 18:52:00.465053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.879 [2024-07-15 18:52:00.465063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.879 qpair failed and we were unable to recover it. 00:26:43.879 [2024-07-15 18:52:00.465208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.879 [2024-07-15 18:52:00.465217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.879 qpair failed and we were unable to recover it. 00:26:43.879 [2024-07-15 18:52:00.465334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.879 [2024-07-15 18:52:00.465344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.879 qpair failed and we were unable to recover it. 00:26:43.879 [2024-07-15 18:52:00.465457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.879 [2024-07-15 18:52:00.465467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.879 qpair failed and we were unable to recover it. 
00:26:43.879 [2024-07-15 18:52:00.465570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.465580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.465678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.465689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.465799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.465809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.466051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.466061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.466163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.466173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.466305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.466316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.466411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.466421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.466544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.466554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.466730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.466740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.466852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.466861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.467020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.467030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.467205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.467215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.467327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.467337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.467445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.467455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.467656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.467666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.467842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.467852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.467947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.467957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.468069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.468079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.468189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.468199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.468298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.468308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.468411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.468421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.468517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.468527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.468655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.468665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.468759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.468769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.468883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.468893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.469000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.469010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.469124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.469134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.469259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.469269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.469386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.469395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.469496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.469505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.469599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.469609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.469723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.469733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.469914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.469924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.470084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.470094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.470269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.470279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.470377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.470388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.470486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.470496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.470678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.470688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.470852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.470861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.471037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.471047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.471215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.471233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.471348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.471360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.471470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.879 [2024-07-15 18:52:00.471479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.879 qpair failed and we were unable to recover it.
00:26:43.879 [2024-07-15 18:52:00.471570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.471579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.471718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.471728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.471894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.471904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.472008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.472019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.472125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.472135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.472238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.472249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.472359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.472369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.472566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.472576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.472772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.472781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.472910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.472919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.473079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.473088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.473173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.473183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.473290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.473301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.473467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.473477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.473597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.473606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.473771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.473781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.473901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.473911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.474015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.474025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.474135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.474144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.474251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.474261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.474372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.474382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.474519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.474529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.474727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.474736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.474902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.474912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.475014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.475024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.475134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.475153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.475284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.475298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.475403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.475417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.475520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.475535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.475700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.475713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.475860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.475875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.475995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.476008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.476124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.476137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.476260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.476274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.476388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.476401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.476515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.476528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.476679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.476692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.476853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.476866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.476973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.476990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.477252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.477267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.477455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.477468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.477588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.477601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.477720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.477734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.477849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.477862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.477969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.477983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.478160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.478173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.478286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.478301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.478428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.478441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.478624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.478637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.478874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.478887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.479082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.880 [2024-07-15 18:52:00.479095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.880 qpair failed and we were unable to recover it.
00:26:43.880 [2024-07-15 18:52:00.479211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.881 [2024-07-15 18:52:00.479229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:43.881 qpair failed and we were unable to recover it.
00:26:43.881 [2024-07-15 18:52:00.479342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.479356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.479461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.479474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.479585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.479599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.479711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.479725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.479847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.479860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 
00:26:43.881 [2024-07-15 18:52:00.479967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.479981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.480186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.480200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.480381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.480395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.480507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.480521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.480625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.480639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 
00:26:43.881 [2024-07-15 18:52:00.480751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.480764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.480879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.480893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.481019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.481031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.481197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.481209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.481372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.481383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 
00:26:43.881 [2024-07-15 18:52:00.481478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.481487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.481584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.481593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.481760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.481770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.481962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.481972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.482066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.482076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 
00:26:43.881 [2024-07-15 18:52:00.482253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.482263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.482439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.482449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.482555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.482565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.482661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.482671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.482851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.482861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 
00:26:43.881 [2024-07-15 18:52:00.483033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.483043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.483166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.483178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.483298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.483309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.483475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.483485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.483597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.483607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 
00:26:43.881 [2024-07-15 18:52:00.483782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.483791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.483868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.483877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.483977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.483986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.484096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.484105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.484355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.484364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 
00:26:43.881 [2024-07-15 18:52:00.484528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.484538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.484741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.484751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.484914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.484923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.485029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.485038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.485214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.485228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 
00:26:43.881 [2024-07-15 18:52:00.485410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.485420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.485531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.485541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.485698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.485708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.485821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.485830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.881 [2024-07-15 18:52:00.485937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.485946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 
00:26:43.881 [2024-07-15 18:52:00.486113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.881 [2024-07-15 18:52:00.486123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.881 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.486242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.486252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.486357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.486367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.486543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.486552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.486670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.486680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 
00:26:43.882 [2024-07-15 18:52:00.486791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.486802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.486919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.486929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.487122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.487132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.487299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.487310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.487434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.487443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 
00:26:43.882 [2024-07-15 18:52:00.487545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.487555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.487657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.487668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.487849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.487858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.487955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.487964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.488077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.488087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 
00:26:43.882 [2024-07-15 18:52:00.488180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.488190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.488388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.488398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.488504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.488514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.488626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.488636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.488733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.488743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 
00:26:43.882 [2024-07-15 18:52:00.488848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.488857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.488963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.488975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.489077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.489087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.489198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.489208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.489334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.489344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 
00:26:43.882 [2024-07-15 18:52:00.489505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.489515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.489686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.489696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.489790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.489800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.489896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.489906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.490018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.490028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 
00:26:43.882 [2024-07-15 18:52:00.490232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.490242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.490404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.490415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.490512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.490523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.490684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.490694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.490863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.490873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 
00:26:43.882 [2024-07-15 18:52:00.490972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.490982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.491083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.491093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.491274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.491284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.491388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.491398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 00:26:43.882 [2024-07-15 18:52:00.491567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.882 [2024-07-15 18:52:00.491576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.882 qpair failed and we were unable to recover it. 
00:26:43.882 [2024-07-15 18:52:00.491688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.882 [2024-07-15 18:52:00.491698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:43.882 qpair failed and we were unable to recover it.
00:26:43.884 [2024-07-15 18:52:00.507564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.884 [2024-07-15 18:52:00.507574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.884 qpair failed and we were unable to recover it. 00:26:43.884 [2024-07-15 18:52:00.507669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.884 [2024-07-15 18:52:00.507679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.884 qpair failed and we were unable to recover it. 00:26:43.884 [2024-07-15 18:52:00.507779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.884 [2024-07-15 18:52:00.507789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.507920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.507929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.508027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.508037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 
00:26:43.885 [2024-07-15 18:52:00.508202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.508212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.508321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.508332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.508505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.508514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.508681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.508691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.508793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.508802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 
00:26:43.885 [2024-07-15 18:52:00.509013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.509023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.509130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.509140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.509261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.509271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.509409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.509422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.509519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.509529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 
00:26:43.885 [2024-07-15 18:52:00.509723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.509732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.509889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.509899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.509994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.510003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.510176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.510185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.510294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.510305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 
00:26:43.885 [2024-07-15 18:52:00.510403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.510413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.510521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.510530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.510721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.510730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.510845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.510855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.510958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.510968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 
00:26:43.885 [2024-07-15 18:52:00.511076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.511086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.511183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.511193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.511297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.511307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.511408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.511418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.511518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.511528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 
00:26:43.885 [2024-07-15 18:52:00.511621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.511630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.511789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.511799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.511960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.511969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.512085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.512095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.512211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.512220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 
00:26:43.885 [2024-07-15 18:52:00.512385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.512395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.512477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.512486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.512585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.512595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.512700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.512709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.512820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.512829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 
00:26:43.885 [2024-07-15 18:52:00.513010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.513020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.513128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.513137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.513254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.513264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.513374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.513383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.513548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.513558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 
00:26:43.885 [2024-07-15 18:52:00.513824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.513834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.513936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.513947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.514049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.514059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.514158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.514168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.514333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.514342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 
00:26:43.885 [2024-07-15 18:52:00.514500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.514510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.514612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.514622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.514728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.514738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.514907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.514919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.515102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.515111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 
00:26:43.885 [2024-07-15 18:52:00.515284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.885 [2024-07-15 18:52:00.515294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.885 qpair failed and we were unable to recover it. 00:26:43.885 [2024-07-15 18:52:00.515458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.886 [2024-07-15 18:52:00.515467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.886 qpair failed and we were unable to recover it. 00:26:43.886 [2024-07-15 18:52:00.515565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.886 [2024-07-15 18:52:00.515575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.886 qpair failed and we were unable to recover it. 00:26:43.886 [2024-07-15 18:52:00.515743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.886 [2024-07-15 18:52:00.515752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.886 qpair failed and we were unable to recover it. 00:26:43.886 [2024-07-15 18:52:00.515979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.886 [2024-07-15 18:52:00.515989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.886 qpair failed and we were unable to recover it. 
00:26:43.886 [2024-07-15 18:52:00.516093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.886 [2024-07-15 18:52:00.516103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.886 qpair failed and we were unable to recover it. 00:26:43.886 [2024-07-15 18:52:00.516198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.886 [2024-07-15 18:52:00.516208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.886 qpair failed and we were unable to recover it. 00:26:43.886 [2024-07-15 18:52:00.516307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.886 [2024-07-15 18:52:00.516317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.886 qpair failed and we were unable to recover it. 00:26:43.886 [2024-07-15 18:52:00.516571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.886 [2024-07-15 18:52:00.516581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.886 qpair failed and we were unable to recover it. 00:26:43.886 [2024-07-15 18:52:00.516738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.886 [2024-07-15 18:52:00.516748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.886 qpair failed and we were unable to recover it. 
00:26:43.886 [2024-07-15 18:52:00.516868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.886 [2024-07-15 18:52:00.516877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.886 qpair failed and we were unable to recover it. 00:26:43.886 [2024-07-15 18:52:00.517063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.886 [2024-07-15 18:52:00.517073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.886 qpair failed and we were unable to recover it. 00:26:43.886 [2024-07-15 18:52:00.517304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.886 [2024-07-15 18:52:00.517314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.886 qpair failed and we were unable to recover it. 00:26:43.886 [2024-07-15 18:52:00.517498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.886 [2024-07-15 18:52:00.517507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.886 qpair failed and we were unable to recover it. 00:26:43.886 [2024-07-15 18:52:00.517610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.886 [2024-07-15 18:52:00.517620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.886 qpair failed and we were unable to recover it. 
00:26:43.886 [2024-07-15 18:52:00.517715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.886 [2024-07-15 18:52:00.517725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.886 qpair failed and we were unable to recover it. 00:26:43.886 [2024-07-15 18:52:00.517889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.886 [2024-07-15 18:52:00.517899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.886 qpair failed and we were unable to recover it. 00:26:43.886 [2024-07-15 18:52:00.518013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.886 [2024-07-15 18:52:00.518022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.886 qpair failed and we were unable to recover it. 00:26:43.886 [2024-07-15 18:52:00.518128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.886 [2024-07-15 18:52:00.518137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.886 qpair failed and we were unable to recover it. 00:26:43.886 [2024-07-15 18:52:00.518335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.886 [2024-07-15 18:52:00.518345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.886 qpair failed and we were unable to recover it. 
00:26:43.886 [2024-07-15 18:52:00.518439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.886 [2024-07-15 18:52:00.518449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.886 qpair failed and we were unable to recover it. 00:26:43.886 [2024-07-15 18:52:00.518552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.886 [2024-07-15 18:52:00.518562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.886 qpair failed and we were unable to recover it. 00:26:43.886 [2024-07-15 18:52:00.518672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.886 [2024-07-15 18:52:00.518682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.886 qpair failed and we were unable to recover it. 00:26:43.886 [2024-07-15 18:52:00.518793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.886 [2024-07-15 18:52:00.518802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.886 qpair failed and we were unable to recover it. 00:26:43.886 [2024-07-15 18:52:00.518924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.886 [2024-07-15 18:52:00.518934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:43.886 qpair failed and we were unable to recover it. 
[... the same three-line sequence — connect() failed, errno = 111 (ECONNREFUSED); sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from 18:52:00.519 through 18:52:00.535 ...]
00:26:44.173 [2024-07-15 18:52:00.535271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.173 [2024-07-15 18:52:00.535281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 00:26:44.173 [2024-07-15 18:52:00.535457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.173 [2024-07-15 18:52:00.535469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 00:26:44.173 [2024-07-15 18:52:00.535753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.173 [2024-07-15 18:52:00.535762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.535959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.535969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.536132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.536143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 
00:26:44.174 [2024-07-15 18:52:00.536334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.536344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.536497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.536508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.536671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.536681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.536797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.536807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.536921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.536930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 
00:26:44.174 [2024-07-15 18:52:00.537037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.537048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.537221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.537235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.537402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.537412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.537605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.537614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.537775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.537785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 
00:26:44.174 [2024-07-15 18:52:00.537898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.537908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.538008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.538018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.538134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.538144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.538314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.538324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.538477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.538486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 
00:26:44.174 [2024-07-15 18:52:00.538779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.538788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.538968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.538978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.539144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.539154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.539263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.539278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.539453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.539464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 
00:26:44.174 [2024-07-15 18:52:00.539567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.539577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.539752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.539762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.539987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.539997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.540180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.540191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.540355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.540365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 
00:26:44.174 [2024-07-15 18:52:00.540566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.540575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.540778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.540788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.540898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.540908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.541020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.541030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.541141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.541151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 
00:26:44.174 [2024-07-15 18:52:00.541269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.541279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.541384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.541394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.541626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.541637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.541753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.541763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.541945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.541955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 
00:26:44.174 [2024-07-15 18:52:00.542067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.542077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.542171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.542180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.542280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.542292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.542525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.542534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 18:52:00.542710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.542720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 
00:26:44.174 [2024-07-15 18:52:00.542830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 18:52:00.542839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.543002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.543011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.543100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.543109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.543360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.543371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.543485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.543495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 
00:26:44.175 [2024-07-15 18:52:00.543610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.543620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.543720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.543730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.543823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.543833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.543952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.543961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.544204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.544214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 
00:26:44.175 [2024-07-15 18:52:00.544298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.544308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.544478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.544487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.544609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.544618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.544728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.544737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.544905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.544915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 
00:26:44.175 [2024-07-15 18:52:00.545022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.545032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.545127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.545137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.545301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.545312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.545421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.545433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.545614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.545624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 
00:26:44.175 [2024-07-15 18:52:00.545717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.545726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.545841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.545850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.546032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.546042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.546146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.546155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.546276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.546286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 
00:26:44.175 [2024-07-15 18:52:00.546391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.546400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.546508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.546517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.546615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.546624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.546723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.546733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.546827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.546836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 
00:26:44.175 [2024-07-15 18:52:00.547013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.547023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.547249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.547259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.547435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.547446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.547619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.547628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.547749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.547759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 
00:26:44.175 [2024-07-15 18:52:00.547919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.547929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.548118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.548128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.548366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.548377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.548556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.548566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.548680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.548690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 
00:26:44.175 [2024-07-15 18:52:00.548807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.548817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.548980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.548990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.549110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.549120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 18:52:00.549294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 18:52:00.549304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.549420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.549429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 
00:26:44.176 [2024-07-15 18:52:00.549607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.549617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.549727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.549737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.549841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.549850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.549956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.549965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.550069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.550079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 
00:26:44.176 [2024-07-15 18:52:00.550204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.550214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.550314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.550324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.550423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.550433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.550576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.550586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.550774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.550783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 
00:26:44.176 [2024-07-15 18:52:00.550876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.550886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.550986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.550996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.551099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.551109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.551208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.551219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.551317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.551328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 
00:26:44.176 [2024-07-15 18:52:00.551432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.551442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.551637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.551647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.551741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.551751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.551850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.551860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.552017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.552027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 
00:26:44.176 [2024-07-15 18:52:00.552190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.552200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.552376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.552386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.552499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.552509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.552615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.552624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.552734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.552744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 
00:26:44.176 [2024-07-15 18:52:00.552871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.552880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.552992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.553002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.553110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.553120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.553288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.553298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.553407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.553417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 
00:26:44.176 [2024-07-15 18:52:00.553596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.553605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.553709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.553719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.553830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.553839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.553936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.553946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.554060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.554070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 
00:26:44.176 [2024-07-15 18:52:00.554187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.554197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.554302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.554312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.554416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.554427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.554532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.554541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.554658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.554668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 
00:26:44.176 [2024-07-15 18:52:00.554771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.554781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 18:52:00.554886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 18:52:00.554896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 18:52:00.555061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.555071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 18:52:00.555179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.555188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 18:52:00.555297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.555307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 
00:26:44.177 [2024-07-15 18:52:00.555401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.555411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 18:52:00.555514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.555524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 18:52:00.555687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.555697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 18:52:00.555862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.555872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 18:52:00.556034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.556043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 
00:26:44.177 [2024-07-15 18:52:00.556156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.556166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 18:52:00.556342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.556353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 18:52:00.556457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.556467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 18:52:00.556589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.556601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 18:52:00.556701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.556711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 
00:26:44.177 [2024-07-15 18:52:00.556884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.556894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 18:52:00.557053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.557063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 18:52:00.557170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.557179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 18:52:00.557283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.557294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 18:52:00.557389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.557399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 
00:26:44.177 [2024-07-15 18:52:00.557492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.557502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 18:52:00.557607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.557617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 18:52:00.557779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.557789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 18:52:00.557956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.557966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 18:52:00.558110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.558119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 
00:26:44.177 [2024-07-15 18:52:00.558283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.558293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 18:52:00.558390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.558400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 18:52:00.558588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.558598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 18:52:00.558723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.558733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 18:52:00.558828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.558837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 
00:26:44.177 [2024-07-15 18:52:00.558997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.559007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 18:52:00.559126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.559135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 18:52:00.559312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.559322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 18:52:00.559417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.559426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 18:52:00.559522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.559532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 
00:26:44.177 [2024-07-15 18:52:00.559712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 18:52:00.559722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 18:52:00.559818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 18:52:00.559828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 18:52:00.559946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 18:52:00.559956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 18:52:00.560068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 18:52:00.560078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 18:52:00.560176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 18:52:00.560187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 
00:26:44.178 [2024-07-15 18:52:00.560374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 18:52:00.560385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 18:52:00.560484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 18:52:00.560494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 18:52:00.560608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 18:52:00.560618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 18:52:00.560868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 18:52:00.560878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 18:52:00.560976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 18:52:00.560986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 
00:26:44.178 [2024-07-15 18:52:00.561169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 18:52:00.561178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 18:52:00.561295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 18:52:00.561305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 18:52:00.561411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 18:52:00.561421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 18:52:00.561542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 18:52:00.561551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 18:52:00.561648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 18:52:00.561657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 
00:26:44.178 [2024-07-15 18:52:00.561755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.178 [2024-07-15 18:52:00.561764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.178 qpair failed and we were unable to recover it.
00:26:44.178 [2024-07-15 18:52:00.561996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.178 [2024-07-15 18:52:00.562006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.178 qpair failed and we were unable to recover it.
00:26:44.178 [2024-07-15 18:52:00.562179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.178 [2024-07-15 18:52:00.562189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.178 qpair failed and we were unable to recover it.
00:26:44.178 [2024-07-15 18:52:00.562355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.178 [2024-07-15 18:52:00.562367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.178 qpair failed and we were unable to recover it.
00:26:44.178 [2024-07-15 18:52:00.562463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.178 [2024-07-15 18:52:00.562473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.178 qpair failed and we were unable to recover it.
00:26:44.178 [2024-07-15 18:52:00.562638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.178 [2024-07-15 18:52:00.562648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.178 qpair failed and we were unable to recover it.
00:26:44.178 [2024-07-15 18:52:00.562816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.178 [2024-07-15 18:52:00.562826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.178 qpair failed and we were unable to recover it.
00:26:44.178 [2024-07-15 18:52:00.562930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.178 [2024-07-15 18:52:00.562940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.178 qpair failed and we were unable to recover it.
00:26:44.178 [2024-07-15 18:52:00.563048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.178 [2024-07-15 18:52:00.563057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.178 qpair failed and we were unable to recover it.
00:26:44.178 [2024-07-15 18:52:00.563147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.178 [2024-07-15 18:52:00.563157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.178 qpair failed and we were unable to recover it.
00:26:44.178 [2024-07-15 18:52:00.563323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.178 [2024-07-15 18:52:00.563333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.178 qpair failed and we were unable to recover it.
00:26:44.178 [2024-07-15 18:52:00.563433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.178 [2024-07-15 18:52:00.563443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.178 qpair failed and we were unable to recover it.
00:26:44.178 [2024-07-15 18:52:00.563552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.178 [2024-07-15 18:52:00.563561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.178 qpair failed and we were unable to recover it.
00:26:44.178 [2024-07-15 18:52:00.563659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.178 [2024-07-15 18:52:00.563669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.178 qpair failed and we were unable to recover it.
00:26:44.178 [2024-07-15 18:52:00.563778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.178 [2024-07-15 18:52:00.563788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.178 qpair failed and we were unable to recover it.
00:26:44.178 [2024-07-15 18:52:00.564007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.178 [2024-07-15 18:52:00.564017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.178 qpair failed and we were unable to recover it.
00:26:44.178 [2024-07-15 18:52:00.564178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.178 [2024-07-15 18:52:00.564188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.178 qpair failed and we were unable to recover it.
00:26:44.178 [2024-07-15 18:52:00.564417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.178 [2024-07-15 18:52:00.564427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.178 qpair failed and we were unable to recover it.
00:26:44.178 [2024-07-15 18:52:00.564517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.178 [2024-07-15 18:52:00.564526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.178 qpair failed and we were unable to recover it.
00:26:44.178 [2024-07-15 18:52:00.564685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.178 [2024-07-15 18:52:00.564695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.178 qpair failed and we were unable to recover it.
00:26:44.178 [2024-07-15 18:52:00.564793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.178 [2024-07-15 18:52:00.564802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.178 qpair failed and we were unable to recover it.
00:26:44.178 [2024-07-15 18:52:00.564902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.178 [2024-07-15 18:52:00.564912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.178 qpair failed and we were unable to recover it.
00:26:44.178 [2024-07-15 18:52:00.565013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.178 [2024-07-15 18:52:00.565023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.178 qpair failed and we were unable to recover it.
00:26:44.178 [2024-07-15 18:52:00.565201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.178 [2024-07-15 18:52:00.565211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.178 qpair failed and we were unable to recover it.
00:26:44.178 [2024-07-15 18:52:00.565319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.178 [2024-07-15 18:52:00.565329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.178 qpair failed and we were unable to recover it.
00:26:44.178 [2024-07-15 18:52:00.565448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.178 [2024-07-15 18:52:00.565458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.178 qpair failed and we were unable to recover it.
00:26:44.178 [2024-07-15 18:52:00.565646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.178 [2024-07-15 18:52:00.565657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.178 qpair failed and we were unable to recover it.
00:26:44.178 [2024-07-15 18:52:00.565753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.178 [2024-07-15 18:52:00.565762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.178 qpair failed and we were unable to recover it.
00:26:44.178 [2024-07-15 18:52:00.565877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.178 [2024-07-15 18:52:00.565886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.178 qpair failed and we were unable to recover it.
00:26:44.178 [2024-07-15 18:52:00.565991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.566000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.566186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.566196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.566312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.566322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.566487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.566497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.566627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.566637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.566778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.566788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.566945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.566954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.567067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.567077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.567192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.567202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.567300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.567310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.567431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.567441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.567650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.567660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.567831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.567842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.568018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.568028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.568170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.568184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.568365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.568375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.568601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.568612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.568794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.568804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.569019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.569030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.569143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.569152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.569301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.569311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.569421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.569431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.569609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.569618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.569731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.569741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.569949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.569959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.570075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.570085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.570351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.570362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.570526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.570536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.570642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.570652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.570754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.570764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.570906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.570916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.571022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.571032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.571191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.571201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.571330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.571340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.571447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.571460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.571571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.571580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.571745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.571755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.571894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.571905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.572092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.572103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.572245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.572255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.572358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.572368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.572571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.179 [2024-07-15 18:52:00.572581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.179 qpair failed and we were unable to recover it.
00:26:44.179 [2024-07-15 18:52:00.572685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.180 [2024-07-15 18:52:00.572695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.180 qpair failed and we were unable to recover it.
00:26:44.180 [2024-07-15 18:52:00.572815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.180 [2024-07-15 18:52:00.572824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.180 qpair failed and we were unable to recover it.
00:26:44.180 [2024-07-15 18:52:00.572998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.180 [2024-07-15 18:52:00.573007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.180 qpair failed and we were unable to recover it.
00:26:44.180 [2024-07-15 18:52:00.573182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.180 [2024-07-15 18:52:00.573192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.180 qpair failed and we were unable to recover it.
00:26:44.180 [2024-07-15 18:52:00.573310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.180 [2024-07-15 18:52:00.573320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.180 qpair failed and we were unable to recover it.
00:26:44.180 [2024-07-15 18:52:00.573442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.180 [2024-07-15 18:52:00.573452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.180 qpair failed and we were unable to recover it.
00:26:44.180 [2024-07-15 18:52:00.573634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.180 [2024-07-15 18:52:00.573644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.180 qpair failed and we were unable to recover it.
00:26:44.180 [2024-07-15 18:52:00.573746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.180 [2024-07-15 18:52:00.573757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.180 qpair failed and we were unable to recover it.
00:26:44.180 [2024-07-15 18:52:00.573898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.180 [2024-07-15 18:52:00.573908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.180 qpair failed and we were unable to recover it.
00:26:44.180 [2024-07-15 18:52:00.574014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.180 [2024-07-15 18:52:00.574024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.180 qpair failed and we were unable to recover it.
00:26:44.180 [2024-07-15 18:52:00.574143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.180 [2024-07-15 18:52:00.574153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.180 qpair failed and we were unable to recover it.
00:26:44.180 [2024-07-15 18:52:00.574300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.180 [2024-07-15 18:52:00.574311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.180 qpair failed and we were unable to recover it.
00:26:44.180 [2024-07-15 18:52:00.574490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.180 [2024-07-15 18:52:00.574502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.180 qpair failed and we were unable to recover it.
00:26:44.180 [2024-07-15 18:52:00.574606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.180 [2024-07-15 18:52:00.574616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.180 qpair failed and we were unable to recover it.
00:26:44.180 [2024-07-15 18:52:00.574821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.180 [2024-07-15 18:52:00.574831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.180 qpair failed and we were unable to recover it.
00:26:44.180 [2024-07-15 18:52:00.575011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.180 [2024-07-15 18:52:00.575021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.180 qpair failed and we were unable to recover it.
00:26:44.180 [2024-07-15 18:52:00.575193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.180 [2024-07-15 18:52:00.575203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.180 qpair failed and we were unable to recover it.
00:26:44.180 [2024-07-15 18:52:00.575311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.180 [2024-07-15 18:52:00.575321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.180 qpair failed and we were unable to recover it.
00:26:44.180 [2024-07-15 18:52:00.575572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.180 [2024-07-15 18:52:00.575582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.180 qpair failed and we were unable to recover it.
00:26:44.180 [2024-07-15 18:52:00.575671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.180 [2024-07-15 18:52:00.575681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.180 qpair failed and we were unable to recover it.
00:26:44.180 [2024-07-15 18:52:00.575842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.180 [2024-07-15 18:52:00.575852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.180 qpair failed and we were unable to recover it.
00:26:44.180 [2024-07-15 18:52:00.576039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.180 [2024-07-15 18:52:00.576049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.180 qpair failed and we were unable to recover it.
00:26:44.180 [2024-07-15 18:52:00.576289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.180 [2024-07-15 18:52:00.576298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.180 qpair failed and we were unable to recover it.
00:26:44.180 [2024-07-15 18:52:00.576454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.180 [2024-07-15 18:52:00.576464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.180 qpair failed and we were unable to recover it.
00:26:44.180 [2024-07-15 18:52:00.576688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 18:52:00.576697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 18:52:00.576867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 18:52:00.576877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 18:52:00.576995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 18:52:00.577005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 18:52:00.577167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 18:52:00.577178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 18:52:00.577369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 18:52:00.577379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 
00:26:44.180 [2024-07-15 18:52:00.577456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 18:52:00.577465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 18:52:00.577644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 18:52:00.577655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 18:52:00.577839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 18:52:00.577849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 18:52:00.577963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 18:52:00.577973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 18:52:00.578143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 18:52:00.578153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 
00:26:44.180 [2024-07-15 18:52:00.578254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 18:52:00.578264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 18:52:00.578375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 18:52:00.578385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 18:52:00.578551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 18:52:00.578560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 18:52:00.578739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 18:52:00.578749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 18:52:00.578922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 18:52:00.578931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 
00:26:44.180 [2024-07-15 18:52:00.579105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 18:52:00.579115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 18:52:00.579278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 18:52:00.579289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 18:52:00.579393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 18:52:00.579402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 18:52:00.579525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 18:52:00.579535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 18:52:00.579741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.579751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 
00:26:44.181 [2024-07-15 18:52:00.579913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.579922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 18:52:00.580027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.580037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 18:52:00.580140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.580150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 18:52:00.580315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.580324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 18:52:00.580495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.580505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 
00:26:44.181 [2024-07-15 18:52:00.580625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.580635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 18:52:00.580795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.580804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 18:52:00.580912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.580922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 18:52:00.581020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.581032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 18:52:00.581141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.581150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 
00:26:44.181 [2024-07-15 18:52:00.581255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.581266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 18:52:00.581442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.581451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 18:52:00.581572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.581581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 18:52:00.581678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.581688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 18:52:00.581850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.581859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 
00:26:44.181 [2024-07-15 18:52:00.581970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.581979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 18:52:00.582085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.582095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 18:52:00.582278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.582288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 18:52:00.582392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.582402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 18:52:00.582523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.582533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 
00:26:44.181 [2024-07-15 18:52:00.582630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.582640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 18:52:00.582746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.582756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 18:52:00.582953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.582962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 18:52:00.583186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.583196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 18:52:00.583315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.583326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 
00:26:44.181 [2024-07-15 18:52:00.583555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.583566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 18:52:00.583691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.583701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 18:52:00.583951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.583961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 18:52:00.584131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.584141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 18:52:00.584339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.584355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 
00:26:44.181 [2024-07-15 18:52:00.584497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.584506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 18:52:00.584631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.584641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 18:52:00.584917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.584927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 18:52:00.585090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.585099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 18:52:00.585268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.585279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 
00:26:44.181 [2024-07-15 18:52:00.585477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 18:52:00.585489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 00:26:44.182 [2024-07-15 18:52:00.585619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.182 [2024-07-15 18:52:00.585629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 00:26:44.182 [2024-07-15 18:52:00.585840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.182 [2024-07-15 18:52:00.585851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 00:26:44.182 [2024-07-15 18:52:00.585963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.182 [2024-07-15 18:52:00.585973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 00:26:44.182 [2024-07-15 18:52:00.586102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.182 [2024-07-15 18:52:00.586111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 
00:26:44.182 [2024-07-15 18:52:00.586286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.182 [2024-07-15 18:52:00.586296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 00:26:44.182 [2024-07-15 18:52:00.586497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.182 [2024-07-15 18:52:00.586506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 00:26:44.182 [2024-07-15 18:52:00.586688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.182 [2024-07-15 18:52:00.586697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 00:26:44.182 [2024-07-15 18:52:00.586945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.182 [2024-07-15 18:52:00.586955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 00:26:44.182 [2024-07-15 18:52:00.587130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.182 [2024-07-15 18:52:00.587140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 
00:26:44.182 [2024-07-15 18:52:00.587258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.182 [2024-07-15 18:52:00.587268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 00:26:44.182 [2024-07-15 18:52:00.587385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.182 [2024-07-15 18:52:00.587394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 00:26:44.182 [2024-07-15 18:52:00.587647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.182 [2024-07-15 18:52:00.587657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 00:26:44.182 [2024-07-15 18:52:00.587827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.182 [2024-07-15 18:52:00.587837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 00:26:44.182 [2024-07-15 18:52:00.588092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.182 [2024-07-15 18:52:00.588103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 
00:26:44.182 [2024-07-15 18:52:00.588328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.182 [2024-07-15 18:52:00.588339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 00:26:44.182 [2024-07-15 18:52:00.588518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.182 [2024-07-15 18:52:00.588528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 00:26:44.182 [2024-07-15 18:52:00.588707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.182 [2024-07-15 18:52:00.588717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 00:26:44.182 [2024-07-15 18:52:00.588909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.182 [2024-07-15 18:52:00.588919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 00:26:44.182 [2024-07-15 18:52:00.589151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.182 [2024-07-15 18:52:00.589161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 
00:26:44.182 [2024-07-15 18:52:00.589310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.182 [2024-07-15 18:52:00.589321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 00:26:44.182 [2024-07-15 18:52:00.589480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.182 [2024-07-15 18:52:00.589490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 00:26:44.182 [2024-07-15 18:52:00.589587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.182 [2024-07-15 18:52:00.589597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 00:26:44.182 [2024-07-15 18:52:00.589826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.182 [2024-07-15 18:52:00.589836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 00:26:44.182 [2024-07-15 18:52:00.590010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.182 [2024-07-15 18:52:00.590020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 
00:26:44.182 [2024-07-15 18:52:00.590166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.182 [2024-07-15 18:52:00.590176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 00:26:44.182 [2024-07-15 18:52:00.590354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.182 [2024-07-15 18:52:00.590364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 00:26:44.182 [2024-07-15 18:52:00.590620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.182 [2024-07-15 18:52:00.590630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 00:26:44.182 [2024-07-15 18:52:00.590741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.182 [2024-07-15 18:52:00.590751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 00:26:44.182 [2024-07-15 18:52:00.590872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.182 [2024-07-15 18:52:00.590881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.182 qpair failed and we were unable to recover it. 
00:26:44.182 [2024-07-15 18:52:00.591044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.182 [2024-07-15 18:52:00.591054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.182 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111; sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously through 18:52:00.613103 ...]
00:26:44.185 [2024-07-15 18:52:00.613378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-07-15 18:52:00.613389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-07-15 18:52:00.613592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-07-15 18:52:00.613603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-07-15 18:52:00.613721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-07-15 18:52:00.613730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-07-15 18:52:00.613920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-07-15 18:52:00.613930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-07-15 18:52:00.614105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-07-15 18:52:00.614115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 
00:26:44.185 [2024-07-15 18:52:00.614248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-07-15 18:52:00.614259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-07-15 18:52:00.614489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-07-15 18:52:00.614499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-07-15 18:52:00.614672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-07-15 18:52:00.614682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-07-15 18:52:00.614943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-07-15 18:52:00.614953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-07-15 18:52:00.615150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-07-15 18:52:00.615160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 
00:26:44.185 [2024-07-15 18:52:00.615364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-07-15 18:52:00.615375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-07-15 18:52:00.615626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-07-15 18:52:00.615636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-07-15 18:52:00.615754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-07-15 18:52:00.615764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-07-15 18:52:00.616044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-07-15 18:52:00.616054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-07-15 18:52:00.616214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-07-15 18:52:00.616228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 
00:26:44.185 [2024-07-15 18:52:00.616433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-07-15 18:52:00.616445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-07-15 18:52:00.616572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-07-15 18:52:00.616582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-07-15 18:52:00.616712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-07-15 18:52:00.616722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-07-15 18:52:00.616965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-07-15 18:52:00.616975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-07-15 18:52:00.617204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-07-15 18:52:00.617214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 
00:26:44.185 [2024-07-15 18:52:00.617407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-07-15 18:52:00.617417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-07-15 18:52:00.617609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-07-15 18:52:00.617619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-07-15 18:52:00.617746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-07-15 18:52:00.617756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.617985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.617995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.618267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.618278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 
00:26:44.186 [2024-07-15 18:52:00.618392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.618403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.618564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.618575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.618767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.618776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.619060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.619070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.619361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.619371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 
00:26:44.186 [2024-07-15 18:52:00.619649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.619659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.619900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.619910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.620080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.620090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.620259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.620269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.620392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.620402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 
00:26:44.186 [2024-07-15 18:52:00.620593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.620603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.620851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.620862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.621042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.621052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.621233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.621243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.621439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.621449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 
00:26:44.186 [2024-07-15 18:52:00.621678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.621688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.621810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.621820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.622098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.622108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.622296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.622306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.622556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.622566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 
00:26:44.186 [2024-07-15 18:52:00.622699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.622709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.623030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.623040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.623218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.623233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.623403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.623414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.623581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.623591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 
00:26:44.186 [2024-07-15 18:52:00.623794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.623804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.623986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.623996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.624175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.624185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.624362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.624373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.624572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.624582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 
00:26:44.186 [2024-07-15 18:52:00.624697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.624710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.624908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.624918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.625188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.625198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.625409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.625419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.625596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.625606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 
00:26:44.186 [2024-07-15 18:52:00.625853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.625863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.626036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.626047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.626298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.626309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.626471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.626481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.626708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.626718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 
00:26:44.186 [2024-07-15 18:52:00.626899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-07-15 18:52:00.626908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-07-15 18:52:00.627152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 18:52:00.627163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 18:52:00.627421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 18:52:00.627432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 18:52:00.627550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 18:52:00.627560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 18:52:00.627755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 18:52:00.627765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 
00:26:44.187 [2024-07-15 18:52:00.627940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 18:52:00.627950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 18:52:00.628126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 18:52:00.628136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 18:52:00.628320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 18:52:00.628331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 18:52:00.628449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 18:52:00.628459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 18:52:00.628630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 18:52:00.628640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 
00:26:44.187 [2024-07-15 18:52:00.628755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.187 [2024-07-15 18:52:00.628764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.187 qpair failed and we were unable to recover it.
00:26:44.190 [2024-07-15 18:52:00.651797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.651807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.652064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.652074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.652329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.652339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.652570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.652580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.652765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.652775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 
00:26:44.190 [2024-07-15 18:52:00.653056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.653066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.653249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.653260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.653444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.653454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.653579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.653589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.653746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.653756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 
00:26:44.190 [2024-07-15 18:52:00.653858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.653867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.654106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.654116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.654358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.654369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.654522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.654532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.654703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.654713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 
00:26:44.190 [2024-07-15 18:52:00.654878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.654888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.655116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.655126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.655380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.655390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.655657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.655667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.655832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.655842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 
00:26:44.190 [2024-07-15 18:52:00.656155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.656165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.656334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.656345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.656533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.656543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.656672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.656682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.656863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.656873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 
00:26:44.190 [2024-07-15 18:52:00.657077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.657089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.657196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.657206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.657386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.657396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.657647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.657657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.657862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.657872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 
00:26:44.190 [2024-07-15 18:52:00.658144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.658154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.658393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.658403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.658529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.658539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.658718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.658728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.658929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.658938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 
00:26:44.190 [2024-07-15 18:52:00.659133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.659142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.659375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.659385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.659594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.659604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.659723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.659732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.659930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.659939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 
00:26:44.190 [2024-07-15 18:52:00.660209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.660220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.660457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 18:52:00.660467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 18:52:00.660629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.660639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.660767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.660777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.661044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.661054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 
00:26:44.191 [2024-07-15 18:52:00.661247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.661257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.661382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.661392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.661570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.661580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.661739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.661749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.661925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.661935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 
00:26:44.191 [2024-07-15 18:52:00.662168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.662178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.662428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.662440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.662557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.662567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.662727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.662737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.663017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.663027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 
00:26:44.191 [2024-07-15 18:52:00.663146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.663156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.663401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.663412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.663593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.663603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.663769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.663779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.663954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.663964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 
00:26:44.191 [2024-07-15 18:52:00.664087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.664097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.664320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.664331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.664452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.664462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.664709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.664719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.664920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.664929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 
00:26:44.191 [2024-07-15 18:52:00.665127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.665138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.665403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.665414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.665545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.665555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.665668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.665678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.665948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.665958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 
00:26:44.191 [2024-07-15 18:52:00.666166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.666176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.666338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.666348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.666577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.666587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.666766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.666776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.667048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.667058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 
00:26:44.191 [2024-07-15 18:52:00.667286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.667296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.667419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.667429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.667539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.667549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.667712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.667722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.667908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.667918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 
00:26:44.191 [2024-07-15 18:52:00.668091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.668100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.668279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.668290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.668496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.668506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.668672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.668683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 18:52:00.668936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 18:52:00.668946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 
00:26:44.191 [2024-07-15 18:52:00.669129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.191 [2024-07-15 18:52:00.669139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.191 qpair failed and we were unable to recover it.
00:26:44.191 [2024-07-15 18:52:00.669352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.669363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.669616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.669625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.669820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.669830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.670124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.670135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.670424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.670434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.670618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.670628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.670744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.670754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.671081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.671091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.671251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.671261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.671435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.671445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.671623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.671634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.671824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.671834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.672040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.672050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.672284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.672294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.672474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.672483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.672658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.672669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.672971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.672981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.673254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.673264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.673461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.673471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.673641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.673652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.673819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.673828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.674057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.674067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.674274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.674285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.674524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.674535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.674710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.674719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.674991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.675000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.675119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.675129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.675310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.675320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.675506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.675515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.675617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.675628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.675829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.675840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.675956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.675966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.676216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.676229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.676408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.676419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.676585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.676594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.676714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.676724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.676940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.676950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.677218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.677261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.677496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.677526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.677749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.677778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.677982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.677993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.678189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.678199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.678456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.678467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.678722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.192 [2024-07-15 18:52:00.678731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.192 qpair failed and we were unable to recover it.
00:26:44.192 [2024-07-15 18:52:00.678940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.678949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.679143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.679172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.679471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.679522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.679755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.679787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.679947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.679961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.680237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.680268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.680534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.680564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.680732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.680762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.680921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.680950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.681167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.681197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.681379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.681391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.681587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.681617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.681849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.681879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.682188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.682218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.682451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.682481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.682726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.682756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.683060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.683090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.683403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.683413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.683537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.683548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.683800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.683810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.684071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.684101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.684321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.684351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.684521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.684550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.684822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.684852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.685140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.685150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.685259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.685270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.685379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.685389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.685642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.685653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.685852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.685862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.686121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.686151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.686453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.686484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.686733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.686763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.686991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.687022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.687289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.687320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.687516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.687526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.193 qpair failed and we were unable to recover it.
00:26:44.193 [2024-07-15 18:52:00.687711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.193 [2024-07-15 18:52:00.687720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.194 qpair failed and we were unable to recover it.
00:26:44.194 [2024-07-15 18:52:00.687900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.194 [2024-07-15 18:52:00.687930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.194 qpair failed and we were unable to recover it.
00:26:44.194 [2024-07-15 18:52:00.688146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.194 [2024-07-15 18:52:00.688175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.194 qpair failed and we were unable to recover it.
00:26:44.194 [2024-07-15 18:52:00.688430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.194 [2024-07-15 18:52:00.688460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.194 qpair failed and we were unable to recover it.
00:26:44.194 [2024-07-15 18:52:00.688687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.194 [2024-07-15 18:52:00.688717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.194 qpair failed and we were unable to recover it.
00:26:44.194 [2024-07-15 18:52:00.688987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.194 [2024-07-15 18:52:00.689016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.194 qpair failed and we were unable to recover it.
00:26:44.194 [2024-07-15 18:52:00.689251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.194 [2024-07-15 18:52:00.689261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.194 qpair failed and we were unable to recover it.
00:26:44.194 [2024-07-15 18:52:00.689380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.194 [2024-07-15 18:52:00.689392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.194 qpair failed and we were unable to recover it.
00:26:44.194 [2024-07-15 18:52:00.689515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.194 [2024-07-15 18:52:00.689526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.194 qpair failed and we were unable to recover it.
00:26:44.194 [2024-07-15 18:52:00.689700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.194 [2024-07-15 18:52:00.689710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.194 qpair failed and we were unable to recover it.
00:26:44.194 [2024-07-15 18:52:00.689909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.194 [2024-07-15 18:52:00.689919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.194 qpair failed and we were unable to recover it.
00:26:44.194 [2024-07-15 18:52:00.690181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.194 [2024-07-15 18:52:00.690211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.194 qpair failed and we were unable to recover it.
00:26:44.194 [2024-07-15 18:52:00.690457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.194 [2024-07-15 18:52:00.690488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.194 qpair failed and we were unable to recover it.
00:26:44.194 [2024-07-15 18:52:00.690659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.194 [2024-07-15 18:52:00.690688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.194 qpair failed and we were unable to recover it.
00:26:44.194 [2024-07-15 18:52:00.691031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.194 [2024-07-15 18:52:00.691061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.194 qpair failed and we were unable to recover it.
00:26:44.194 [2024-07-15 18:52:00.691384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-07-15 18:52:00.691416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-07-15 18:52:00.691714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-07-15 18:52:00.691743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-07-15 18:52:00.692007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-07-15 18:52:00.692037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-07-15 18:52:00.692322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-07-15 18:52:00.692352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-07-15 18:52:00.692526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-07-15 18:52:00.692556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 
00:26:44.194 [2024-07-15 18:52:00.692710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-07-15 18:52:00.692739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-07-15 18:52:00.692965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-07-15 18:52:00.692995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-07-15 18:52:00.693140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-07-15 18:52:00.693170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-07-15 18:52:00.693467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-07-15 18:52:00.693498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-07-15 18:52:00.693675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-07-15 18:52:00.693705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 
00:26:44.194 [2024-07-15 18:52:00.693948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-07-15 18:52:00.693978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-07-15 18:52:00.694126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-07-15 18:52:00.694155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-07-15 18:52:00.694401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-07-15 18:52:00.694432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-07-15 18:52:00.694702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-07-15 18:52:00.694732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-07-15 18:52:00.694950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-07-15 18:52:00.694980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 
00:26:44.194 [2024-07-15 18:52:00.695194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-07-15 18:52:00.695223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-07-15 18:52:00.695443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-07-15 18:52:00.695473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-07-15 18:52:00.695612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-07-15 18:52:00.695642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-07-15 18:52:00.695796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-07-15 18:52:00.695825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-07-15 18:52:00.696052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-07-15 18:52:00.696082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 
00:26:44.194 [2024-07-15 18:52:00.696301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-07-15 18:52:00.696333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-07-15 18:52:00.696551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-07-15 18:52:00.696561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-07-15 18:52:00.696812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-07-15 18:52:00.696833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-07-15 18:52:00.697021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-07-15 18:52:00.697030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-07-15 18:52:00.697203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-07-15 18:52:00.697214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 
00:26:44.194 [2024-07-15 18:52:00.697330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-07-15 18:52:00.697341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.697466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.697476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.697659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.697669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.697767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.697776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.697894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.697904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 
00:26:44.195 [2024-07-15 18:52:00.698105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.698115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.698409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.698419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.698529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.698542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.698751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.698760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.698942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.698951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 
00:26:44.195 [2024-07-15 18:52:00.699147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.699156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.699315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.699324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.699486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.699495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.699628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.699638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.699922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.699932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 
00:26:44.195 [2024-07-15 18:52:00.700131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.700141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.700304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.700316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.700596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.700625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.700800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.700829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.701089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.701130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 
00:26:44.195 [2024-07-15 18:52:00.701380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.701391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.701560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.701571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.701783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.701813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.701960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.701989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.702191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.702221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 
00:26:44.195 [2024-07-15 18:52:00.702388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.702397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.702573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.702610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.702820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.702849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.703169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.703199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.703478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.703509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 
00:26:44.195 [2024-07-15 18:52:00.703684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.703714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.703876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.703906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.704110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.704140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.704368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.704399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.704636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.704666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 
00:26:44.195 [2024-07-15 18:52:00.704880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.704910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.705210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.705220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.705484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.705494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.705605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.705615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-07-15 18:52:00.705780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.705789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 
00:26:44.195 [2024-07-15 18:52:00.706023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-07-15 18:52:00.706053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.196 [2024-07-15 18:52:00.706301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-07-15 18:52:00.706340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-07-15 18:52:00.706518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-07-15 18:52:00.706527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-07-15 18:52:00.706712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-07-15 18:52:00.706722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-07-15 18:52:00.706886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-07-15 18:52:00.706896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 
00:26:44.196 [2024-07-15 18:52:00.707094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-07-15 18:52:00.707104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-07-15 18:52:00.707291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-07-15 18:52:00.707302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-07-15 18:52:00.707420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-07-15 18:52:00.707432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-07-15 18:52:00.707547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-07-15 18:52:00.707557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-07-15 18:52:00.707717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-07-15 18:52:00.707727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 
00:26:44.196 [2024-07-15 18:52:00.707977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-07-15 18:52:00.708006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-07-15 18:52:00.708283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-07-15 18:52:00.708314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-07-15 18:52:00.708479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-07-15 18:52:00.708489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-07-15 18:52:00.708692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-07-15 18:52:00.708721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-07-15 18:52:00.708949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-07-15 18:52:00.708979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 
00:26:44.196 [2024-07-15 18:52:00.709269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-07-15 18:52:00.709296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-07-15 18:52:00.709523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-07-15 18:52:00.709533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-07-15 18:52:00.709654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-07-15 18:52:00.709665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-07-15 18:52:00.709793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-07-15 18:52:00.709803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-07-15 18:52:00.709992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-07-15 18:52:00.710002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 
00:26:44.196 [2024-07-15 18:52:00.710276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-07-15 18:52:00.710287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-07-15 18:52:00.710412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-07-15 18:52:00.710422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-07-15 18:52:00.710597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-07-15 18:52:00.710607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-07-15 18:52:00.710819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-07-15 18:52:00.710829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-07-15 18:52:00.710935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-07-15 18:52:00.710945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 
00:26:44.196 [2024-07-15 18:52:00.711199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-07-15 18:52:00.711209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-07-15 18:52:00.711398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-07-15 18:52:00.711408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-07-15 18:52:00.711637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-07-15 18:52:00.711667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-07-15 18:52:00.711877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-07-15 18:52:00.711908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-07-15 18:52:00.712220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-07-15 18:52:00.712235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 
00:26:44.196 [2024-07-15 18:52:00.712420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.196 [2024-07-15 18:52:00.712430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.196 qpair failed and we were unable to recover it.
00:26:44.196 [2024-07-15 18:52:00.712611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.196 [2024-07-15 18:52:00.712640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.196 qpair failed and we were unable to recover it.
00:26:44.196 [2024-07-15 18:52:00.712944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.196 [2024-07-15 18:52:00.712974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.196 qpair failed and we were unable to recover it.
00:26:44.196 [2024-07-15 18:52:00.713186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.196 [2024-07-15 18:52:00.713221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.196 qpair failed and we were unable to recover it.
00:26:44.196 [2024-07-15 18:52:00.713487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.196 [2024-07-15 18:52:00.713498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.196 qpair failed and we were unable to recover it.
00:26:44.196 [2024-07-15 18:52:00.713768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.196 [2024-07-15 18:52:00.713778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.196 qpair failed and we were unable to recover it.
00:26:44.196 [2024-07-15 18:52:00.714003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.196 [2024-07-15 18:52:00.714014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.196 qpair failed and we were unable to recover it.
00:26:44.196 [2024-07-15 18:52:00.714258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.196 [2024-07-15 18:52:00.714269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.196 qpair failed and we were unable to recover it.
00:26:44.196 [2024-07-15 18:52:00.714547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.196 [2024-07-15 18:52:00.714556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.196 qpair failed and we were unable to recover it.
00:26:44.196 [2024-07-15 18:52:00.714732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.196 [2024-07-15 18:52:00.714742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.196 qpair failed and we were unable to recover it.
00:26:44.196 [2024-07-15 18:52:00.714933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.196 [2024-07-15 18:52:00.714943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.196 qpair failed and we were unable to recover it.
00:26:44.196 [2024-07-15 18:52:00.715181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.196 [2024-07-15 18:52:00.715214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.715510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.715541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.715758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.715788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.716103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.716134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.716283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.716314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.716525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.716535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.716716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.716727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.716992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.717021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.717263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.717295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.717579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.717589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.717721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.717730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.717925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.717935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.718223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.718266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.718542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.718572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.718792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.718822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.719106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.719136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.719356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.719366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.719550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.719560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.719779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.719788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.720036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.720066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.720319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.720363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.720528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.720538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.720670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.720682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.720845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.720855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.721130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.721141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.721379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.721389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.721554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.721564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.721741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.721750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.721985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.722015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.722237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.722267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.722424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.722454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.722772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.722782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.722979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.722990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.723246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.723256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.723508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.197 [2024-07-15 18:52:00.723518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.197 qpair failed and we were unable to recover it.
00:26:44.197 [2024-07-15 18:52:00.723682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.723692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.723900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.723935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.724236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.724267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.724563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.724592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.724837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.724866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.725129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.725159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.725459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.725490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.725709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.725738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.726012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.726040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.726290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.726321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.726548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.726557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.726794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.726805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.727060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.727090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.727266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.727298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.727468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.727498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.727766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.727795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.728019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.728049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.728207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.728246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.728475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.728485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.728714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.728724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.728930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.728940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.729146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.729176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.729491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.729521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.729723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.729753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.730039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.730068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.730384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.730394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.730694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.730704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.730974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.730984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.731233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.731244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.731503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.731513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.731645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.731655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.731831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.731840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.731953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.731963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.732192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.732202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.732446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.732457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.732636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.732646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.732911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.732922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.733183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.733212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.733538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.733606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.733848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.733881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.198 [2024-07-15 18:52:00.734127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.198 [2024-07-15 18:52:00.734158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:44.198 qpair failed and we were unable to recover it.
00:26:44.199 [2024-07-15 18:52:00.734383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.199 [2024-07-15 18:52:00.734416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:44.199 qpair failed and we were unable to recover it.
00:26:44.199 [2024-07-15 18:52:00.734683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.199 [2024-07-15 18:52:00.734697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:44.199 qpair failed and we were unable to recover it.
00:26:44.199 [2024-07-15 18:52:00.734936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.199 [2024-07-15 18:52:00.734966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:44.199 qpair failed and we were unable to recover it.
00:26:44.199 [2024-07-15 18:52:00.735256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.199 [2024-07-15 18:52:00.735287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:44.199 qpair failed and we were unable to recover it.
00:26:44.199 [2024-07-15 18:52:00.735524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.199 [2024-07-15 18:52:00.735538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:44.199 qpair failed and we were unable to recover it.
00:26:44.199 [2024-07-15 18:52:00.735707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.199 [2024-07-15 18:52:00.735720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:44.199 qpair failed and we were unable to recover it.
00:26:44.199 [2024-07-15 18:52:00.735987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.199 [2024-07-15 18:52:00.736017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:44.199 qpair failed and we were unable to recover it.
00:26:44.199 [2024-07-15 18:52:00.736243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.199 [2024-07-15 18:52:00.736274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:44.199 qpair failed and we were unable to recover it.
00:26:44.199 [2024-07-15 18:52:00.736568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.736597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 00:26:44.199 [2024-07-15 18:52:00.736835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.736865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 00:26:44.199 [2024-07-15 18:52:00.737105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.737144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 00:26:44.199 [2024-07-15 18:52:00.737439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.737469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 00:26:44.199 [2024-07-15 18:52:00.737674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.737687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 
00:26:44.199 [2024-07-15 18:52:00.737816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.737830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 00:26:44.199 [2024-07-15 18:52:00.738105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.738134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 00:26:44.199 [2024-07-15 18:52:00.738374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.738405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 00:26:44.199 [2024-07-15 18:52:00.738613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.738642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 00:26:44.199 [2024-07-15 18:52:00.738958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.738987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 
00:26:44.199 [2024-07-15 18:52:00.739277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.739307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 00:26:44.199 [2024-07-15 18:52:00.739535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.739549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 00:26:44.199 [2024-07-15 18:52:00.739792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.739806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 00:26:44.199 [2024-07-15 18:52:00.740046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.740060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 00:26:44.199 [2024-07-15 18:52:00.740326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.740339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 
00:26:44.199 [2024-07-15 18:52:00.740592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.740605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 00:26:44.199 [2024-07-15 18:52:00.740893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.740907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 00:26:44.199 [2024-07-15 18:52:00.741083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.741097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 00:26:44.199 [2024-07-15 18:52:00.741385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.741399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 00:26:44.199 [2024-07-15 18:52:00.741619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.741649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 
00:26:44.199 [2024-07-15 18:52:00.741866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.741895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 00:26:44.199 [2024-07-15 18:52:00.742132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.742161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 00:26:44.199 [2024-07-15 18:52:00.742372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.742403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 00:26:44.199 [2024-07-15 18:52:00.742717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.742746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 00:26:44.199 [2024-07-15 18:52:00.742968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.742998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 
00:26:44.199 [2024-07-15 18:52:00.743212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.743256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 00:26:44.199 [2024-07-15 18:52:00.743529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.743559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 00:26:44.199 [2024-07-15 18:52:00.743769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.743798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 00:26:44.199 [2024-07-15 18:52:00.744088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.744117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 00:26:44.199 [2024-07-15 18:52:00.744417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.744449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 
00:26:44.199 [2024-07-15 18:52:00.744721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.744751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 00:26:44.199 [2024-07-15 18:52:00.745035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-07-15 18:52:00.745065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 00:26:44.199 [2024-07-15 18:52:00.745371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.745385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.745506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.745520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.745691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.745705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 
00:26:44.200 [2024-07-15 18:52:00.745898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.745927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.746175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.746204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.746424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.746455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.746739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.746752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.746989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.747003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 
00:26:44.200 [2024-07-15 18:52:00.747172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.747185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.747450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.747481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.747780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.747815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.748065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.748094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.748312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.748326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 
00:26:44.200 [2024-07-15 18:52:00.748526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.748540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.748709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.748752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.748978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.749007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.749250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.749281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.749627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.749670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 
00:26:44.200 [2024-07-15 18:52:00.749959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.749987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.750201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.750240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.750482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.750512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.750733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.750763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.751059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.751088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 
00:26:44.200 [2024-07-15 18:52:00.751256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.751287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.751583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.751597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.751786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.751800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.752017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.752030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.752297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.752327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 
00:26:44.200 [2024-07-15 18:52:00.752574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.752603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.752889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.752918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.753131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.753160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.753401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.753416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.753677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.753690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 
00:26:44.200 [2024-07-15 18:52:00.753794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.753807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.753986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.754000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.754277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.754291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.754577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.754590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.754880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.754907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 
00:26:44.200 [2024-07-15 18:52:00.755168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.755179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.755429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.755440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.200 [2024-07-15 18:52:00.755685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.200 [2024-07-15 18:52:00.755695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.200 qpair failed and we were unable to recover it. 00:26:44.201 [2024-07-15 18:52:00.755967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.201 [2024-07-15 18:52:00.755976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.201 qpair failed and we were unable to recover it. 00:26:44.201 [2024-07-15 18:52:00.756154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.201 [2024-07-15 18:52:00.756164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.201 qpair failed and we were unable to recover it. 
00:26:44.201 [2024-07-15 18:52:00.756356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.201 [2024-07-15 18:52:00.756386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.201 qpair failed and we were unable to recover it. 00:26:44.201 [2024-07-15 18:52:00.756661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.201 [2024-07-15 18:52:00.756692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.201 qpair failed and we were unable to recover it. 00:26:44.201 [2024-07-15 18:52:00.756914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.201 [2024-07-15 18:52:00.756943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.201 qpair failed and we were unable to recover it. 00:26:44.201 [2024-07-15 18:52:00.757242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.201 [2024-07-15 18:52:00.757273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.201 qpair failed and we were unable to recover it. 00:26:44.201 [2024-07-15 18:52:00.757567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.201 [2024-07-15 18:52:00.757598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.201 qpair failed and we were unable to recover it. 
00:26:44.201 [2024-07-15 18:52:00.757896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.201 [2024-07-15 18:52:00.757925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.201 qpair failed and we were unable to recover it. 00:26:44.201 [2024-07-15 18:52:00.758223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.201 [2024-07-15 18:52:00.758263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.201 qpair failed and we were unable to recover it. 00:26:44.201 [2024-07-15 18:52:00.758537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.201 [2024-07-15 18:52:00.758567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.201 qpair failed and we were unable to recover it. 00:26:44.201 [2024-07-15 18:52:00.758845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.201 [2024-07-15 18:52:00.758875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.201 qpair failed and we were unable to recover it. 00:26:44.201 [2024-07-15 18:52:00.759099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.201 [2024-07-15 18:52:00.759129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.201 qpair failed and we were unable to recover it. 
00:26:44.201 [2024-07-15 18:52:00.759406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.201 [2024-07-15 18:52:00.759416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.201 qpair failed and we were unable to recover it.
00:26:44.204 --- previous message group (connect() failed, errno = 111 / sock connection error of tqpair=0x7ff7c0000b90 / qpair failed and we were unable to recover it.) repeated 114 more times, [2024-07-15 18:52:00.759645] through [2024-07-15 18:52:00.788991], differing only in timestamps ---
00:26:44.204 [2024-07-15 18:52:00.789267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.204 [2024-07-15 18:52:00.789278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.204 qpair failed and we were unable to recover it. 00:26:44.204 [2024-07-15 18:52:00.789439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.204 [2024-07-15 18:52:00.789449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.204 qpair failed and we were unable to recover it. 00:26:44.204 [2024-07-15 18:52:00.789630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.204 [2024-07-15 18:52:00.789640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.204 qpair failed and we were unable to recover it. 00:26:44.204 [2024-07-15 18:52:00.789761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.204 [2024-07-15 18:52:00.789771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.204 qpair failed and we were unable to recover it. 00:26:44.204 [2024-07-15 18:52:00.789956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.204 [2024-07-15 18:52:00.789985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.204 qpair failed and we were unable to recover it. 
00:26:44.204 [2024-07-15 18:52:00.790276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.204 [2024-07-15 18:52:00.790307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.204 qpair failed and we were unable to recover it. 00:26:44.204 [2024-07-15 18:52:00.790576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.204 [2024-07-15 18:52:00.790605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.204 qpair failed and we were unable to recover it. 00:26:44.204 [2024-07-15 18:52:00.790877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.204 [2024-07-15 18:52:00.790907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.204 qpair failed and we were unable to recover it. 00:26:44.204 [2024-07-15 18:52:00.791258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.204 [2024-07-15 18:52:00.791289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.204 qpair failed and we were unable to recover it. 00:26:44.204 [2024-07-15 18:52:00.791484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.204 [2024-07-15 18:52:00.791494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.204 qpair failed and we were unable to recover it. 
00:26:44.204 [2024-07-15 18:52:00.791752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.204 [2024-07-15 18:52:00.791781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.204 qpair failed and we were unable to recover it. 00:26:44.204 [2024-07-15 18:52:00.792003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.204 [2024-07-15 18:52:00.792032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.204 qpair failed and we were unable to recover it. 00:26:44.204 [2024-07-15 18:52:00.792253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.204 [2024-07-15 18:52:00.792263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.204 qpair failed and we were unable to recover it. 00:26:44.204 [2024-07-15 18:52:00.792521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.204 [2024-07-15 18:52:00.792531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.204 qpair failed and we were unable to recover it. 00:26:44.204 [2024-07-15 18:52:00.792706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.204 [2024-07-15 18:52:00.792716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.204 qpair failed and we were unable to recover it. 
00:26:44.204 [2024-07-15 18:52:00.792884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.204 [2024-07-15 18:52:00.792913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.204 qpair failed and we were unable to recover it. 00:26:44.204 [2024-07-15 18:52:00.793191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.204 [2024-07-15 18:52:00.793220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.204 qpair failed and we were unable to recover it. 00:26:44.204 [2024-07-15 18:52:00.793533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.204 [2024-07-15 18:52:00.793563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.204 qpair failed and we were unable to recover it. 00:26:44.204 [2024-07-15 18:52:00.793774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.204 [2024-07-15 18:52:00.793802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.204 qpair failed and we were unable to recover it. 00:26:44.204 [2024-07-15 18:52:00.794039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.204 [2024-07-15 18:52:00.794069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.204 qpair failed and we were unable to recover it. 
00:26:44.204 [2024-07-15 18:52:00.794287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.204 [2024-07-15 18:52:00.794318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.204 qpair failed and we were unable to recover it. 00:26:44.204 [2024-07-15 18:52:00.794491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.204 [2024-07-15 18:52:00.794520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.204 qpair failed and we were unable to recover it. 00:26:44.204 [2024-07-15 18:52:00.794723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.204 [2024-07-15 18:52:00.794752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.204 qpair failed and we were unable to recover it. 00:26:44.204 [2024-07-15 18:52:00.795076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.204 [2024-07-15 18:52:00.795106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.204 qpair failed and we were unable to recover it. 00:26:44.204 [2024-07-15 18:52:00.795278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.204 [2024-07-15 18:52:00.795309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.204 qpair failed and we were unable to recover it. 
00:26:44.204 [2024-07-15 18:52:00.795607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.204 [2024-07-15 18:52:00.795637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.204 qpair failed and we were unable to recover it. 00:26:44.204 [2024-07-15 18:52:00.795906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.204 [2024-07-15 18:52:00.795934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.204 qpair failed and we were unable to recover it. 00:26:44.204 [2024-07-15 18:52:00.796174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.204 [2024-07-15 18:52:00.796204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.204 qpair failed and we were unable to recover it. 00:26:44.204 [2024-07-15 18:52:00.796507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.204 [2024-07-15 18:52:00.796517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.204 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.796763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.796775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 
00:26:44.205 [2024-07-15 18:52:00.797056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.797066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.797305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.797315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.797474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.797485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.797646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.797656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.797827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.797837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 
00:26:44.205 [2024-07-15 18:52:00.798079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.798109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.798334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.798365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.798631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.798641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.798867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.798877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.799048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.799058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 
00:26:44.205 [2024-07-15 18:52:00.799246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.799276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.799566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.799595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.799807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.799837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.800070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.800099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.800258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.800269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 
00:26:44.205 [2024-07-15 18:52:00.800467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.800477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.800637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.800647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.800941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.800971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.801241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.801271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.801584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.801614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 
00:26:44.205 [2024-07-15 18:52:00.801902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.801932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.802157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.802187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.802498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.802509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.802698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.802708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.802891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.802921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 
00:26:44.205 [2024-07-15 18:52:00.803133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.803162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.803499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.803530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.803798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.803827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.804028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.804057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.804300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.804331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 
00:26:44.205 [2024-07-15 18:52:00.804540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.804570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.804774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.804802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.804954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.804983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.805235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.805267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.805483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.805512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 
00:26:44.205 [2024-07-15 18:52:00.805797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.805826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.806097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.806127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.806421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.806452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.806689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.806717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.205 qpair failed and we were unable to recover it. 00:26:44.205 [2024-07-15 18:52:00.807020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.205 [2024-07-15 18:52:00.807060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 
00:26:44.206 [2024-07-15 18:52:00.807285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.807316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 00:26:44.206 [2024-07-15 18:52:00.807586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.807616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 00:26:44.206 [2024-07-15 18:52:00.807917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.807947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 00:26:44.206 [2024-07-15 18:52:00.808191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.808220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 00:26:44.206 [2024-07-15 18:52:00.808450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.808482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 
00:26:44.206 [2024-07-15 18:52:00.808655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.808665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 00:26:44.206 [2024-07-15 18:52:00.808937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.808966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 00:26:44.206 [2024-07-15 18:52:00.809190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.809219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 00:26:44.206 [2024-07-15 18:52:00.809509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.809539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 00:26:44.206 [2024-07-15 18:52:00.809836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.809865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 
00:26:44.206 [2024-07-15 18:52:00.810083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.810112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 00:26:44.206 [2024-07-15 18:52:00.810347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.810379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 00:26:44.206 [2024-07-15 18:52:00.810604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.810614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 00:26:44.206 [2024-07-15 18:52:00.810866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.810886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 00:26:44.206 [2024-07-15 18:52:00.811132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.811142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 
00:26:44.206 [2024-07-15 18:52:00.811315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.811325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 00:26:44.206 [2024-07-15 18:52:00.811581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.811610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 00:26:44.206 [2024-07-15 18:52:00.811838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.811867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 00:26:44.206 [2024-07-15 18:52:00.812103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.812133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 00:26:44.206 [2024-07-15 18:52:00.812396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.812427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 
00:26:44.206 [2024-07-15 18:52:00.812757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.812786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 00:26:44.206 [2024-07-15 18:52:00.813076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.813106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 00:26:44.206 [2024-07-15 18:52:00.813414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.813444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 00:26:44.206 [2024-07-15 18:52:00.813709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.813719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 00:26:44.206 [2024-07-15 18:52:00.813917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.813926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 
00:26:44.206 [2024-07-15 18:52:00.814120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.814130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 00:26:44.206 [2024-07-15 18:52:00.814310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.814321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 00:26:44.206 [2024-07-15 18:52:00.814577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.814607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 00:26:44.206 [2024-07-15 18:52:00.814817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.814847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 00:26:44.206 [2024-07-15 18:52:00.815090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.815120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 
00:26:44.206 [2024-07-15 18:52:00.815338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.815369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 00:26:44.206 [2024-07-15 18:52:00.815640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.815670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 00:26:44.206 [2024-07-15 18:52:00.815873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.815902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 00:26:44.206 [2024-07-15 18:52:00.816140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.206 [2024-07-15 18:52:00.816170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.206 qpair failed and we were unable to recover it. 00:26:44.206 [2024-07-15 18:52:00.816473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.816504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 
00:26:44.207 [2024-07-15 18:52:00.816659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.816689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.816961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.816971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.817134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.817144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.817425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.817457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.817749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.817785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 
00:26:44.207 [2024-07-15 18:52:00.818101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.818131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.818430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.818461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.818731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.818761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.818994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.819024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.819323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.819354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 
00:26:44.207 [2024-07-15 18:52:00.819572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.819582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.819761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.819771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.819963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.819993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.820270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.820301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.820634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.820664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 
00:26:44.207 [2024-07-15 18:52:00.820957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.820987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.821284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.821314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.821595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.821604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.821785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.821795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.821968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.821978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 
00:26:44.207 [2024-07-15 18:52:00.822153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.822189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.822499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.822529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.822815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.822845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.823142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.823171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.823334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.823364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 
00:26:44.207 [2024-07-15 18:52:00.823658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.823687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.823936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.823946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.824168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.824178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.824433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.824444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.824618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.824628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 
00:26:44.207 [2024-07-15 18:52:00.824888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.824917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.825218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.825257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.825550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.825579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.825900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.825929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.826212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.826250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 
00:26:44.207 [2024-07-15 18:52:00.826484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.826495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.826764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.826774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.827000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.827010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.827210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.827219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 00:26:44.207 [2024-07-15 18:52:00.827338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.207 [2024-07-15 18:52:00.827348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.207 qpair failed and we were unable to recover it. 
00:26:44.208 [2024-07-15 18:52:00.827601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.208 [2024-07-15 18:52:00.827611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.208 qpair failed and we were unable to recover it. 00:26:44.208 [2024-07-15 18:52:00.827806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.208 [2024-07-15 18:52:00.827816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.208 qpair failed and we were unable to recover it. 00:26:44.208 [2024-07-15 18:52:00.828016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.208 [2024-07-15 18:52:00.828026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.208 qpair failed and we were unable to recover it. 00:26:44.208 [2024-07-15 18:52:00.828191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.208 [2024-07-15 18:52:00.828201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.208 qpair failed and we were unable to recover it. 00:26:44.208 [2024-07-15 18:52:00.828432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.208 [2024-07-15 18:52:00.828469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.208 qpair failed and we were unable to recover it. 
00:26:44.208 [2024-07-15 18:52:00.828724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.208 [2024-07-15 18:52:00.828753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.208 qpair failed and we were unable to recover it. 00:26:44.208 [2024-07-15 18:52:00.828998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.208 [2024-07-15 18:52:00.829027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.208 qpair failed and we were unable to recover it. 00:26:44.208 [2024-07-15 18:52:00.829341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.208 [2024-07-15 18:52:00.829381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.208 qpair failed and we were unable to recover it. 00:26:44.208 [2024-07-15 18:52:00.829637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.208 [2024-07-15 18:52:00.829647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.208 qpair failed and we were unable to recover it. 00:26:44.208 [2024-07-15 18:52:00.829862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.208 [2024-07-15 18:52:00.829872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.208 qpair failed and we were unable to recover it. 
00:26:44.208 [2024-07-15 18:52:00.830129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.208 [2024-07-15 18:52:00.830139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.208 qpair failed and we were unable to recover it. 00:26:44.208 [2024-07-15 18:52:00.830412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.208 [2024-07-15 18:52:00.830422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.208 qpair failed and we were unable to recover it. 00:26:44.208 [2024-07-15 18:52:00.830697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.208 [2024-07-15 18:52:00.830707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.208 qpair failed and we were unable to recover it. 00:26:44.208 [2024-07-15 18:52:00.830887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.208 [2024-07-15 18:52:00.830897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.208 qpair failed and we were unable to recover it. 00:26:44.208 [2024-07-15 18:52:00.831174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.208 [2024-07-15 18:52:00.831183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.208 qpair failed and we were unable to recover it. 
00:26:44.208 [2024-07-15 18:52:00.831428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.208 [2024-07-15 18:52:00.831459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.208 qpair failed and we were unable to recover it. 00:26:44.208 [2024-07-15 18:52:00.831726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.208 [2024-07-15 18:52:00.831756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.208 qpair failed and we were unable to recover it. 00:26:44.208 [2024-07-15 18:52:00.832036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.208 [2024-07-15 18:52:00.832065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.208 qpair failed and we were unable to recover it. 00:26:44.208 [2024-07-15 18:52:00.832316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.208 [2024-07-15 18:52:00.832348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.208 qpair failed and we were unable to recover it. 00:26:44.208 [2024-07-15 18:52:00.832645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.208 [2024-07-15 18:52:00.832675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.208 qpair failed and we were unable to recover it. 
00:26:44.208 [2024-07-15 18:52:00.832944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.208 [2024-07-15 18:52:00.832973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.208 qpair failed and we were unable to recover it. 00:26:44.208 [2024-07-15 18:52:00.833295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.208 [2024-07-15 18:52:00.833325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.208 qpair failed and we were unable to recover it. 00:26:44.208 [2024-07-15 18:52:00.833548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.208 [2024-07-15 18:52:00.833578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.208 qpair failed and we were unable to recover it. 00:26:44.208 [2024-07-15 18:52:00.833866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.208 [2024-07-15 18:52:00.833875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.208 qpair failed and we were unable to recover it. 00:26:44.208 [2024-07-15 18:52:00.834047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.208 [2024-07-15 18:52:00.834057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.208 qpair failed and we were unable to recover it. 
00:26:44.208 [2024-07-15 18:52:00.834288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.208 [2024-07-15 18:52:00.834298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.208 qpair failed and we were unable to recover it.
00:26:44.208 [2024-07-15 18:52:00.834404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.208 [2024-07-15 18:52:00.834414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.208 qpair failed and we were unable to recover it.
00:26:44.208 [2024-07-15 18:52:00.834688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.208 [2024-07-15 18:52:00.834698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.208 qpair failed and we were unable to recover it.
00:26:44.208 [2024-07-15 18:52:00.834950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.208 [2024-07-15 18:52:00.834959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.208 qpair failed and we were unable to recover it.
00:26:44.208 [2024-07-15 18:52:00.835135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.208 [2024-07-15 18:52:00.835145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.208 qpair failed and we were unable to recover it.
00:26:44.208 [2024-07-15 18:52:00.835324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.208 [2024-07-15 18:52:00.835334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.208 qpair failed and we were unable to recover it.
00:26:44.208 [2024-07-15 18:52:00.835538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.208 [2024-07-15 18:52:00.835548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.208 qpair failed and we were unable to recover it.
00:26:44.208 [2024-07-15 18:52:00.835798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.208 [2024-07-15 18:52:00.835807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.208 qpair failed and we were unable to recover it.
00:26:44.208 [2024-07-15 18:52:00.835983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.208 [2024-07-15 18:52:00.835993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.208 qpair failed and we were unable to recover it.
00:26:44.208 [2024-07-15 18:52:00.836180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.208 [2024-07-15 18:52:00.836210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.208 qpair failed and we were unable to recover it.
00:26:44.208 [2024-07-15 18:52:00.836494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.208 [2024-07-15 18:52:00.836525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.208 qpair failed and we were unable to recover it.
00:26:44.208 [2024-07-15 18:52:00.836754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.208 [2024-07-15 18:52:00.836783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.208 qpair failed and we were unable to recover it.
00:26:44.208 [2024-07-15 18:52:00.837006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.208 [2024-07-15 18:52:00.837034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.208 qpair failed and we were unable to recover it.
00:26:44.208 [2024-07-15 18:52:00.837326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.208 [2024-07-15 18:52:00.837365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.208 qpair failed and we were unable to recover it.
00:26:44.208 [2024-07-15 18:52:00.837527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.208 [2024-07-15 18:52:00.837536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.208 qpair failed and we were unable to recover it.
00:26:44.208 [2024-07-15 18:52:00.837769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.208 [2024-07-15 18:52:00.837798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.838067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.838096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.838431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.838462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.838761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.838791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.839060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.839094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.839388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.839419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.839690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.839719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.840035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.840065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.840333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.840364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.840649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.840679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.840956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.840966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.841192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.841202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.841370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.841381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.841617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.841626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.841854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.841864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.842152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.842181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.842538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.842569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.842810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.842840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.843097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.843107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.843221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.843237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.843518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.843547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.843765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.843794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.844015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.844044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.844335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.844366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.844536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.844566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.844844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.844854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.845081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.845090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.845343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.845353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.845536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.845546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.845745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.845755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.845867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.845877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.846108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.846118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.846325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.846335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.846588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.846618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.846903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.846933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.847207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.847245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.847490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.847519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.847815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.847845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.848141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.848171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.848481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.848526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.848837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.209 [2024-07-15 18:52:00.848869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.209 qpair failed and we were unable to recover it.
00:26:44.209 [2024-07-15 18:52:00.849127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.210 [2024-07-15 18:52:00.849157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.210 qpair failed and we were unable to recover it.
00:26:44.210 [2024-07-15 18:52:00.849404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.210 [2024-07-15 18:52:00.849435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.210 qpair failed and we were unable to recover it.
00:26:44.210 [2024-07-15 18:52:00.849735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.210 [2024-07-15 18:52:00.849746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.210 qpair failed and we were unable to recover it.
00:26:44.210 [2024-07-15 18:52:00.849913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.210 [2024-07-15 18:52:00.849925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.210 qpair failed and we were unable to recover it.
00:26:44.210 [2024-07-15 18:52:00.850029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.210 [2024-07-15 18:52:00.850042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.210 qpair failed and we were unable to recover it.
00:26:44.210 [2024-07-15 18:52:00.850297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.210 [2024-07-15 18:52:00.850307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.210 qpair failed and we were unable to recover it.
00:26:44.210 [2024-07-15 18:52:00.850483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.210 [2024-07-15 18:52:00.850492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.210 qpair failed and we were unable to recover it.
00:26:44.210 [2024-07-15 18:52:00.850763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.210 [2024-07-15 18:52:00.850774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.210 qpair failed and we were unable to recover it.
00:26:44.210 [2024-07-15 18:52:00.851039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.210 [2024-07-15 18:52:00.851050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.210 qpair failed and we were unable to recover it.
00:26:44.210 [2024-07-15 18:52:00.851232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.210 [2024-07-15 18:52:00.851243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.210 qpair failed and we were unable to recover it.
00:26:44.210 [2024-07-15 18:52:00.851509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.210 [2024-07-15 18:52:00.851539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.210 qpair failed and we were unable to recover it.
00:26:44.210 [2024-07-15 18:52:00.851757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.210 [2024-07-15 18:52:00.851787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.210 qpair failed and we were unable to recover it.
00:26:44.210 [2024-07-15 18:52:00.852063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.210 [2024-07-15 18:52:00.852073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.210 qpair failed and we were unable to recover it.
00:26:44.210 [2024-07-15 18:52:00.852259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.210 [2024-07-15 18:52:00.852269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.210 qpair failed and we were unable to recover it.
00:26:44.210 [2024-07-15 18:52:00.852450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.210 [2024-07-15 18:52:00.852460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.210 qpair failed and we were unable to recover it.
00:26:44.210 [2024-07-15 18:52:00.852656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.210 [2024-07-15 18:52:00.852665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.210 qpair failed and we were unable to recover it.
00:26:44.210 [2024-07-15 18:52:00.852892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.210 [2024-07-15 18:52:00.852903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.210 qpair failed and we were unable to recover it.
00:26:44.210 [2024-07-15 18:52:00.853102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.210 [2024-07-15 18:52:00.853114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.210 qpair failed and we were unable to recover it.
00:26:44.210 [2024-07-15 18:52:00.853371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.210 [2024-07-15 18:52:00.853382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.210 qpair failed and we were unable to recover it.
00:26:44.210 [2024-07-15 18:52:00.853574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.210 [2024-07-15 18:52:00.853585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.210 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.853819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.853831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.854108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.854120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.854302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.854313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.854598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.854628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.854804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.854834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.855011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.855040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.855279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.855289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.855468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.855479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.855676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.855707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.855899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.855929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.856194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.856233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.856488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.856518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.856690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.856719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.856971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.857001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.857164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.857194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.857422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.857452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.857658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.857688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.857927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.857958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.858204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.858251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.858457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.858487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.858688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.858699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.858931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.858960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.859263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.859295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.859582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.859609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.859779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.859789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.859966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.859976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.860243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.860277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.860543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.860574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.860900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.860910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.861091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.861101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.489 [2024-07-15 18:52:00.861303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.489 [2024-07-15 18:52:00.861314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.489 qpair failed and we were unable to recover it.
00:26:44.490 [2024-07-15 18:52:00.861544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.490 [2024-07-15 18:52:00.861555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.490 qpair failed and we were unable to recover it.
00:26:44.490 [2024-07-15 18:52:00.861741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.490 [2024-07-15 18:52:00.861752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.490 qpair failed and we were unable to recover it.
00:26:44.490 [2024-07-15 18:52:00.862034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.490 [2024-07-15 18:52:00.862065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.490 qpair failed and we were unable to recover it.
00:26:44.490 [2024-07-15 18:52:00.862281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.490 [2024-07-15 18:52:00.862312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.490 qpair failed and we were unable to recover it.
00:26:44.490 [2024-07-15 18:52:00.862516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.490 [2024-07-15 18:52:00.862527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.490 qpair failed and we were unable to recover it.
00:26:44.490 [2024-07-15 18:52:00.862691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.490 [2024-07-15 18:52:00.862701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.490 qpair failed and we were unable to recover it.
00:26:44.490 [2024-07-15 18:52:00.862914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.490 [2024-07-15 18:52:00.862944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.490 qpair failed and we were unable to recover it.
00:26:44.490 [2024-07-15 18:52:00.863244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.490 [2024-07-15 18:52:00.863276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.490 qpair failed and we were unable to recover it.
00:26:44.490 [2024-07-15 18:52:00.863555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.863567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-07-15 18:52:00.863742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.863752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-07-15 18:52:00.863985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.863995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-07-15 18:52:00.864178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.864188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-07-15 18:52:00.864309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.864320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 
00:26:44.490 [2024-07-15 18:52:00.864483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.864493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-07-15 18:52:00.864695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.864725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-07-15 18:52:00.864946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.864976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-07-15 18:52:00.865262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.865293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-07-15 18:52:00.865610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.865641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 
00:26:44.490 [2024-07-15 18:52:00.865921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.865931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-07-15 18:52:00.866109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.866120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-07-15 18:52:00.866333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.866345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-07-15 18:52:00.866589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.866620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-07-15 18:52:00.866913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.866947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 
00:26:44.490 [2024-07-15 18:52:00.867237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.867270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-07-15 18:52:00.867569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.867600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-07-15 18:52:00.867896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.867908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-07-15 18:52:00.868130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.868141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-07-15 18:52:00.868400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.868411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 
00:26:44.490 [2024-07-15 18:52:00.868589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.868600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-07-15 18:52:00.868788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.868798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-07-15 18:52:00.868904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.868914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-07-15 18:52:00.869080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.869090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-07-15 18:52:00.869307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.869321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 
00:26:44.490 [2024-07-15 18:52:00.869490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.869501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-07-15 18:52:00.869732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.869763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-07-15 18:52:00.870001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.870031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-07-15 18:52:00.870333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.870370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-07-15 18:52:00.870664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.870696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 
00:26:44.490 [2024-07-15 18:52:00.871004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.871013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-07-15 18:52:00.871234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-07-15 18:52:00.871246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.871447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.871457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.871648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.871658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.871792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.871804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 
00:26:44.491 [2024-07-15 18:52:00.871975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.871986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.872194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.872240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.872526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.872559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.872733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.872763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.873010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.873021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 
00:26:44.491 [2024-07-15 18:52:00.873270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.873282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.873460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.873471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.873660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.873691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.873990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.874021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.874294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.874325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 
00:26:44.491 [2024-07-15 18:52:00.874560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.874590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.874796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.874806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.875048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.875078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.875301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.875333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.875559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.875571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 
00:26:44.491 [2024-07-15 18:52:00.875677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.875688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.875947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.875958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.876222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.876238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.876475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.876504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.876828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.876859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 
00:26:44.491 [2024-07-15 18:52:00.877135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.877164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.877435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.877465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.877787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.877816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.878096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.878125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.878269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.878300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 
00:26:44.491 [2024-07-15 18:52:00.878612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.878622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.878791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.878801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.879093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.879123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.879445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.879475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.879737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.879750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 
00:26:44.491 [2024-07-15 18:52:00.879907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.879917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.880178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.880207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.880439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.880469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.880708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.880737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.880944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.880973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 
00:26:44.491 [2024-07-15 18:52:00.881253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.881285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.881601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-07-15 18:52:00.881630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-07-15 18:52:00.881925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-07-15 18:52:00.881955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 00:26:44.492 [2024-07-15 18:52:00.882260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-07-15 18:52:00.882291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 00:26:44.492 [2024-07-15 18:52:00.882534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-07-15 18:52:00.882563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 
00:26:44.492 [2024-07-15 18:52:00.882779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-07-15 18:52:00.882808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it.
[log truncated: the same three-line error sequence (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 18:52:00.882 through 18:52:00.911]
00:26:44.495 [2024-07-15 18:52:00.911983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.911993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-07-15 18:52:00.912238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.912249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-07-15 18:52:00.912501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.912510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-07-15 18:52:00.912712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.912722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-07-15 18:52:00.912947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.912956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 
00:26:44.495 [2024-07-15 18:52:00.913126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.913136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-07-15 18:52:00.913312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.913322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-07-15 18:52:00.913593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.913603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-07-15 18:52:00.913853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.913863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-07-15 18:52:00.914059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.914069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 
00:26:44.495 [2024-07-15 18:52:00.914253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.914263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-07-15 18:52:00.914466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.914476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-07-15 18:52:00.914667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.914677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-07-15 18:52:00.914929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.914939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-07-15 18:52:00.915194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.915203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 
00:26:44.495 [2024-07-15 18:52:00.915388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.915398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-07-15 18:52:00.915528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.915538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-07-15 18:52:00.915658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.915668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-07-15 18:52:00.915942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.915951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-07-15 18:52:00.916156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.916166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 
00:26:44.495 [2024-07-15 18:52:00.916280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.916291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-07-15 18:52:00.916496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.916508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-07-15 18:52:00.916755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.916765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-07-15 18:52:00.916942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.916952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-07-15 18:52:00.917132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.917141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 
00:26:44.495 [2024-07-15 18:52:00.917304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.917314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-07-15 18:52:00.917588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.917598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-07-15 18:52:00.917773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.917782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-07-15 18:52:00.918014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.918024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-07-15 18:52:00.918261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.918271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 
00:26:44.495 [2024-07-15 18:52:00.918384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.918394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-07-15 18:52:00.918507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.918517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-07-15 18:52:00.918677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.918686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-07-15 18:52:00.918793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.918802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-07-15 18:52:00.919039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.919049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 
00:26:44.495 [2024-07-15 18:52:00.919211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.919221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-07-15 18:52:00.919446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-07-15 18:52:00.919456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.919705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.919714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.919885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.919895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.920058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.920069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 
00:26:44.496 [2024-07-15 18:52:00.920266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.920276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.920398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.920408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.920665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.920675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.920969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.920979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.921233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.921243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 
00:26:44.496 [2024-07-15 18:52:00.921469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.921479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.921733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.921743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.921938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.921948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.922114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.922123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.922396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.922406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 
00:26:44.496 [2024-07-15 18:52:00.922581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.922591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.922775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.922785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.922978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.922988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.923117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.923126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.923378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.923388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 
00:26:44.496 [2024-07-15 18:52:00.923655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.923665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.923896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.923905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.924015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.924025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.924210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.924220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.924452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.924462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 
00:26:44.496 [2024-07-15 18:52:00.924714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.924724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.925012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.925024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.925188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.925198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.925376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.925386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.925617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.925626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 
00:26:44.496 [2024-07-15 18:52:00.925899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.925909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.926154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.926164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.926414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.926424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.926599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.926609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.926859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.926869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 
00:26:44.496 [2024-07-15 18:52:00.927046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.927057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.927169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.927179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.927405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.927415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.927662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.927672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-07-15 18:52:00.927871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-07-15 18:52:00.927880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 
00:26:44.496 [2024-07-15 18:52:00.928056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.496 [2024-07-15 18:52:00.928066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.496 qpair failed and we were unable to recover it.
00:26:44.496 [2024-07-15 18:52:00.928296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.496 [2024-07-15 18:52:00.928307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.496 qpair failed and we were unable to recover it.
00:26:44.496 [2024-07-15 18:52:00.928538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.496 [2024-07-15 18:52:00.928547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.928831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.928841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.929070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.929079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.929326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.929336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.929567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.929577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.929772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.929782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.930032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.930041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.930286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.930296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.930458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.930467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.930696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.930706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.930902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.930912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.931092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.931101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.931333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.931344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.931593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.931603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.931828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.931838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.932076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.932086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.932361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.932371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.932557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.932567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.932793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.932803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.933055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.933064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.933250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.933261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.933438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.933448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.933623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.933633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.933884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.933894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.934001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.934014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.934241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.934251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.934425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.934435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.934611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.934621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.934893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.934903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.935063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.935072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.935319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.935329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.935558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.935568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.935808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.935818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.936087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.936097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.936267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.497 [2024-07-15 18:52:00.936278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.497 qpair failed and we were unable to recover it.
00:26:44.497 [2024-07-15 18:52:00.936453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.936462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.936567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.936577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.936704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.936714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.936883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.936893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.937089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.937099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.937285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.937296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.937453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.937463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.937711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.937721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.937968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.937977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.938232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.938242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.938468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.938478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.938649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.938659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.938884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.938894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.939127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.939137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.939416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.939427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.939666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.939676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.939851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.939861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.940063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.940073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.940272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.940282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.940549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.940559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.940835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.940845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.941048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.941058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.941237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.941247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.941498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.941508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.941761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.941771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.942005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.942015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.942276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.942286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.942486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.942497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.942794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.942804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.943058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.943069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.943346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.943357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.943483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.943493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.943722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.943732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.943912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.943921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.944181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.944191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.944403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.944413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.944681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.944691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.944854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.944863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.945100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.945110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.945341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.945351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.498 [2024-07-15 18:52:00.945538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.498 [2024-07-15 18:52:00.945548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.498 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.945772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.945781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.945944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.945953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.946181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.946191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.946362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.946372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.946608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.946617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.946793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.946802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.947038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.947048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.947259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.947269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.947437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.947447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.947700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.947710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.947870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.947879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.948002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.948012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.948266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.948276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.948482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.948491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.948719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.948729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.948986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.948996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.949172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.949183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.949382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.949393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.949656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.949666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.949950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.949960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.950163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.950172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.950337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.950348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.950550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.950560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.950811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.950821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.951123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.951132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.951377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.951387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.951611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.951621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.951890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.951900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.952153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.952165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.952451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.952461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.952631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.952641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.952890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.952900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.953129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.953139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.953335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.953346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.953506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.953516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.953767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.953777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.954027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.499 [2024-07-15 18:52:00.954037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.499 qpair failed and we were unable to recover it.
00:26:44.499 [2024-07-15 18:52:00.954289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 18:52:00.954300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 18:52:00.954470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 18:52:00.954480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 18:52:00.954731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 18:52:00.954741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 18:52:00.954990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.954999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.955161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.955171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 
00:26:44.500 [2024-07-15 18:52:00.955377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.955387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.955567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.955577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.955829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.955838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.956091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.956101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.956342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.956352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 
00:26:44.500 [2024-07-15 18:52:00.956529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.956539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.956701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.956711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.956899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.956909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.957179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.957189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.957351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.957361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 
00:26:44.500 [2024-07-15 18:52:00.957542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.957552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.957805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.957815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.958041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.958051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.958280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.958291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.958568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.958578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 
00:26:44.500 [2024-07-15 18:52:00.958819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.958828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.959078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.959088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.959258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.959268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.959470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.959480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.959733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.959743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 
00:26:44.500 [2024-07-15 18:52:00.960024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.960034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.960240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.960250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.960475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.960485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.960642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.960652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.960901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.960911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 
00:26:44.500 [2024-07-15 18:52:00.961126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.961136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.961409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.961421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.961601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.961611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.961719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.961729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.961911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.961921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 
00:26:44.500 [2024-07-15 18:52:00.962102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.962112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.962286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.962296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.962526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.962536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.962706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.962716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.962989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.962999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 
00:26:44.500 [2024-07-15 18:52:00.963248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.963258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.963376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.963385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.963559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.963569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.963828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 18:52:00.963838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 18:52:00.964092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.964101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 
00:26:44.501 [2024-07-15 18:52:00.964332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.964342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 18:52:00.964570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.964580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 18:52:00.964711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.964721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 18:52:00.964901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.964911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 18:52:00.965078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.965088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 
00:26:44.501 [2024-07-15 18:52:00.965336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.965346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 18:52:00.965521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.965531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 18:52:00.965786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.965796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 18:52:00.965955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.965965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 18:52:00.966190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.966200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 
00:26:44.501 [2024-07-15 18:52:00.966439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.966449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 18:52:00.966638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.966648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 18:52:00.966902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.966912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 18:52:00.967161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.967171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 18:52:00.967290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.967301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 
00:26:44.501 [2024-07-15 18:52:00.967500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.967509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 18:52:00.967807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.967816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 18:52:00.968095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.968105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 18:52:00.968299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.968310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 18:52:00.968585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.968595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 
00:26:44.501 [2024-07-15 18:52:00.968843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.968853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 18:52:00.969077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.969087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 18:52:00.969194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.969204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 18:52:00.969487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.969498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 18:52:00.969676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.969686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 
00:26:44.501 [2024-07-15 18:52:00.969865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.969875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 18:52:00.970046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.970058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 18:52:00.970327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.970337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 18:52:00.970512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.970522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 18:52:00.970776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.970786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 
00:26:44.501 [2024-07-15 18:52:00.970956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.970965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 18:52:00.971191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.971200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 18:52:00.971437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.971448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 18:52:00.971619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.971629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 18:52:00.971880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 18:52:00.971889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 
00:26:44.504 [2024-07-15 18:52:00.996384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-07-15 18:52:00.996394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.504 qpair failed and we were unable to recover it. 00:26:44.504 [2024-07-15 18:52:00.996640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-07-15 18:52:00.996651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.504 qpair failed and we were unable to recover it. 00:26:44.504 [2024-07-15 18:52:00.996878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-07-15 18:52:00.996887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.504 qpair failed and we were unable to recover it. 00:26:44.504 [2024-07-15 18:52:00.997060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-07-15 18:52:00.997070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.504 qpair failed and we were unable to recover it. 00:26:44.504 [2024-07-15 18:52:00.997297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-07-15 18:52:00.997307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.504 qpair failed and we were unable to recover it. 
00:26:44.504 [2024-07-15 18:52:00.997537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-07-15 18:52:00.997546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.504 qpair failed and we were unable to recover it. 00:26:44.504 [2024-07-15 18:52:00.997753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-07-15 18:52:00.997762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.504 qpair failed and we were unable to recover it. 00:26:44.504 [2024-07-15 18:52:00.997992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-07-15 18:52:00.998003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.504 qpair failed and we were unable to recover it. 00:26:44.504 [2024-07-15 18:52:00.998110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-07-15 18:52:00.998120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.504 qpair failed and we were unable to recover it. 00:26:44.504 [2024-07-15 18:52:00.998242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-07-15 18:52:00.998252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.504 qpair failed and we were unable to recover it. 
00:26:44.504 [2024-07-15 18:52:00.998481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:00.998491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:00.998653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:00.998663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:00.998901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:00.998911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:00.999088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:00.999098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:00.999324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:00.999335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 
00:26:44.505 [2024-07-15 18:52:00.999517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:00.999526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:00.999717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:00.999727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:00.999847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:00.999857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:01.000032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.000042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:01.000287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.000299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 
00:26:44.505 [2024-07-15 18:52:01.000473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.000482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:01.000642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.000652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:01.000874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.000885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:01.001137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.001147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:01.001320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.001330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 
00:26:44.505 [2024-07-15 18:52:01.001512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.001522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:01.001794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.001804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:01.001972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.001982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:01.002161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.002171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:01.002279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.002289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 
00:26:44.505 [2024-07-15 18:52:01.002468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.002478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:01.002731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.002742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:01.002990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.003002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:01.003248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.003258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:01.003480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.003490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 
00:26:44.505 [2024-07-15 18:52:01.003682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.003692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:01.003981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.003991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:01.004262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.004274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:01.004469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.004479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:01.004715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.004726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 
00:26:44.505 [2024-07-15 18:52:01.004970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.004980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:01.005208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.005218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:01.005407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.005417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:01.005578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.005588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:01.005699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.005709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 
00:26:44.505 [2024-07-15 18:52:01.005977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.005987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:01.006244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.006255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:01.006492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.006502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:01.006701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.006711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-07-15 18:52:01.006874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.006884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 
00:26:44.505 [2024-07-15 18:52:01.007068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-07-15 18:52:01.007078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.506 [2024-07-15 18:52:01.007211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.007222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-07-15 18:52:01.007452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.007463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-07-15 18:52:01.007693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.007703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-07-15 18:52:01.007907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.007920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 
00:26:44.506 [2024-07-15 18:52:01.008221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.008236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-07-15 18:52:01.008476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.008486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-07-15 18:52:01.008664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.008675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-07-15 18:52:01.008929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.008941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-07-15 18:52:01.009069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.009080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 
00:26:44.506 [2024-07-15 18:52:01.009269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.009280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-07-15 18:52:01.009546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.009556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-07-15 18:52:01.009733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.009743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-07-15 18:52:01.009920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.009930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-07-15 18:52:01.010036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.010047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 
00:26:44.506 [2024-07-15 18:52:01.010155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.010166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-07-15 18:52:01.010361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.010372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-07-15 18:52:01.010486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.010495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-07-15 18:52:01.010619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.010632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-07-15 18:52:01.010824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.010835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 
00:26:44.506 [2024-07-15 18:52:01.010994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.011003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-07-15 18:52:01.011295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.011305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-07-15 18:52:01.011511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.011520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-07-15 18:52:01.011683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.011693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-07-15 18:52:01.011865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.011875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 
00:26:44.506 [2024-07-15 18:52:01.012109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.012119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-07-15 18:52:01.012348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.012359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-07-15 18:52:01.012476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.012486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-07-15 18:52:01.012697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.012707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-07-15 18:52:01.012824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.012834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 
00:26:44.506 [2024-07-15 18:52:01.013081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.013090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-07-15 18:52:01.013265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.013275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-07-15 18:52:01.013438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.013449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-07-15 18:52:01.013736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.013746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-07-15 18:52:01.013872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-07-15 18:52:01.013882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 
00:26:44.506 [2024-07-15 18:52:01.014137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.014146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.014261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.014271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.014501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.014512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.014684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.014693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.014872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.014882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 
00:26:44.507 [2024-07-15 18:52:01.015127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.015137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.015260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.015270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.015455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.015465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.015695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.015705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.015892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.015902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 
00:26:44.507 [2024-07-15 18:52:01.016151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.016161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.016395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.016405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.016567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.016577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.016704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.016713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.016838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.016848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 
00:26:44.507 [2024-07-15 18:52:01.017077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.017087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.017333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.017343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.017516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.017526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.017751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.017760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.017921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.017931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 
00:26:44.507 [2024-07-15 18:52:01.018200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.018210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.018402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.018412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.018684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.018694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.018903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.018915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.019025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.019035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 
00:26:44.507 [2024-07-15 18:52:01.019332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.019342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.019590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.019600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.019810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.019820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.020029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.020039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.020265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.020275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 
00:26:44.507 [2024-07-15 18:52:01.020502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.020512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.020626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.020636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.020844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.020854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.020947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.020956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.021126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.021136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 
00:26:44.507 [2024-07-15 18:52:01.021404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.021415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.021686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.021696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.021821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.021831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.022062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.022072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 18:52:01.022249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.022260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 
00:26:44.507 [2024-07-15 18:52:01.022486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 18:52:01.022496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.022725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.022735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.023049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.023058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.023167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.023177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.023298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.023309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 
00:26:44.508 [2024-07-15 18:52:01.023486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.023495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.023666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.023676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.023943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.023953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.024133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.024143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.024379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.024389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 
00:26:44.508 [2024-07-15 18:52:01.024562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.024572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.024734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.024744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.024982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.024992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.025281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.025292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.025469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.025479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 
00:26:44.508 [2024-07-15 18:52:01.025657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.025667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.025903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.025912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.026160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.026170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.026361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.026371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.026566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.026576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 
00:26:44.508 [2024-07-15 18:52:01.026749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.026759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.027014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.027024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.027184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.027194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.027433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.027445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.027573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.027583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 
00:26:44.508 [2024-07-15 18:52:01.027763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.027773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.027951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.027961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.028162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.028172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.028281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.028290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.028491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.028501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 
00:26:44.508 [2024-07-15 18:52:01.028679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.028689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.028808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.028817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.028994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.029003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.029263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.029273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.029393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.029403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 
00:26:44.508 [2024-07-15 18:52:01.029564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.029574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.029827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.029837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.030014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.030024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.030208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.030218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.030485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.030495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 
00:26:44.508 [2024-07-15 18:52:01.030672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.030682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 18:52:01.030860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 18:52:01.030870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 18:52:01.031033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 18:52:01.031043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 18:52:01.031137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 18:52:01.031146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 18:52:01.031394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 18:52:01.031405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 
00:26:44.509 [2024-07-15 18:52:01.031585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 18:52:01.031595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 18:52:01.031845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 18:52:01.031854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 18:52:01.032123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 18:52:01.032133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 18:52:01.032253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 18:52:01.032263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 18:52:01.032512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 18:52:01.032522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 
00:26:44.509 [2024-07-15 18:52:01.032780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 18:52:01.032789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 18:52:01.033069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 18:52:01.033079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 18:52:01.033243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 18:52:01.033253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 18:52:01.033503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 18:52:01.033513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 18:52:01.033708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 18:52:01.033719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 
00:26:44.509 [2024-07-15 18:52:01.033946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.509 [2024-07-15 18:52:01.033956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.509 qpair failed and we were unable to recover it.
00:26:44.509 [2024-07-15 18:52:01.034083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.509 [2024-07-15 18:52:01.034092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.509 qpair failed and we were unable to recover it.
00:26:44.509 [2024-07-15 18:52:01.034279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.509 [2024-07-15 18:52:01.034289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.509 qpair failed and we were unable to recover it.
00:26:44.509 [2024-07-15 18:52:01.034452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.509 [2024-07-15 18:52:01.034462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.509 qpair failed and we were unable to recover it.
00:26:44.509 [2024-07-15 18:52:01.034623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.509 [2024-07-15 18:52:01.034633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.509 qpair failed and we were unable to recover it.
00:26:44.509 [2024-07-15 18:52:01.034799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.509 [2024-07-15 18:52:01.034809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.509 qpair failed and we were unable to recover it.
00:26:44.509 [2024-07-15 18:52:01.034998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.509 [2024-07-15 18:52:01.035008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.509 qpair failed and we were unable to recover it.
00:26:44.509 [2024-07-15 18:52:01.035182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.509 [2024-07-15 18:52:01.035191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.509 qpair failed and we were unable to recover it.
00:26:44.509 [2024-07-15 18:52:01.035372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.509 [2024-07-15 18:52:01.035384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.509 qpair failed and we were unable to recover it.
00:26:44.509 [2024-07-15 18:52:01.035640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.509 [2024-07-15 18:52:01.035649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.509 qpair failed and we were unable to recover it.
00:26:44.509 [2024-07-15 18:52:01.035896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.509 [2024-07-15 18:52:01.035905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.509 qpair failed and we were unable to recover it.
00:26:44.509 [2024-07-15 18:52:01.036156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.509 [2024-07-15 18:52:01.036165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.509 qpair failed and we were unable to recover it.
00:26:44.509 [2024-07-15 18:52:01.036326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.509 [2024-07-15 18:52:01.036336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.509 qpair failed and we were unable to recover it.
00:26:44.509 [2024-07-15 18:52:01.036521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.509 [2024-07-15 18:52:01.036530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.509 qpair failed and we were unable to recover it.
00:26:44.509 [2024-07-15 18:52:01.036663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.509 [2024-07-15 18:52:01.036673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.509 qpair failed and we were unable to recover it.
00:26:44.509 [2024-07-15 18:52:01.036849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.509 [2024-07-15 18:52:01.036859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.509 qpair failed and we were unable to recover it.
00:26:44.509 [2024-07-15 18:52:01.037114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.509 [2024-07-15 18:52:01.037124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.509 qpair failed and we were unable to recover it.
00:26:44.509 [2024-07-15 18:52:01.037374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.509 [2024-07-15 18:52:01.037384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.509 qpair failed and we were unable to recover it.
00:26:44.509 [2024-07-15 18:52:01.037630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.509 [2024-07-15 18:52:01.037640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.509 qpair failed and we were unable to recover it.
00:26:44.509 [2024-07-15 18:52:01.037806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.509 [2024-07-15 18:52:01.037816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.509 qpair failed and we were unable to recover it.
00:26:44.509 [2024-07-15 18:52:01.038077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.509 [2024-07-15 18:52:01.038087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.509 qpair failed and we were unable to recover it.
00:26:44.509 [2024-07-15 18:52:01.038367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.509 [2024-07-15 18:52:01.038377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.509 qpair failed and we were unable to recover it.
00:26:44.509 [2024-07-15 18:52:01.038636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.509 [2024-07-15 18:52:01.038646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.509 qpair failed and we were unable to recover it.
00:26:44.509 [2024-07-15 18:52:01.038824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.509 [2024-07-15 18:52:01.038833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.509 qpair failed and we were unable to recover it.
00:26:44.509 [2024-07-15 18:52:01.039024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.509 [2024-07-15 18:52:01.039034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.509 qpair failed and we were unable to recover it.
00:26:44.509 [2024-07-15 18:52:01.039209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.509 [2024-07-15 18:52:01.039218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.509 qpair failed and we were unable to recover it.
00:26:44.509 [2024-07-15 18:52:01.039400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.509 [2024-07-15 18:52:01.039410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.509 qpair failed and we were unable to recover it.
00:26:44.509 [2024-07-15 18:52:01.039636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.039646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.039891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.039900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.040065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.040074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.040292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.040302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.040550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.040560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.040838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.040848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.041094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.041104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.041377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.041387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.041598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.041608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.041791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.041801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.041963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.041974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.042170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.042179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.042350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.042360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.042564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.042573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.042828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.042838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.043083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.043093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.043345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.043355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.043535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.043545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.043799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.043809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.044055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.044065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.044255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.044265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.044511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.044523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.044700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.044710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.044890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.044899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.045126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.045135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.045416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.045426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.045665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.045675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.045854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.045863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.046120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.046130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.046305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.046316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.046567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.046577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.046757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.046766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.046950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.046960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.047124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.047134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.047400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.510 [2024-07-15 18:52:01.047410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.510 qpair failed and we were unable to recover it.
00:26:44.510 [2024-07-15 18:52:01.047585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.047595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.047770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.047780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.047978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.047987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.048215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.048228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.048428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.048437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.048663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.048673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.048929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.048939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.049113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.049123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.049375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.049385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.049634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.049645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.049808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.049818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.050017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.050026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.050205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.050215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.050454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.050464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.050578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.050587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.050768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.050778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.050940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.050950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.051197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.051236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.051536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.051566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.051786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.051816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.052029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.052058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.052293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.052303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.052558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.052588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.052794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.052823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.052994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.053023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.053233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.053264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.053577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.053613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.053827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.053857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.053986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.053995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.054268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.054300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.054470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.054500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.054789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.054818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.055039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.511 [2024-07-15 18:52:01.055069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.511 qpair failed and we were unable to recover it.
00:26:44.511 [2024-07-15 18:52:01.055358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 18:52:01.055390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 18:52:01.055604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 18:52:01.055634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 18:52:01.055905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 18:52:01.055934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 18:52:01.056159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 18:52:01.056169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 18:52:01.056398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 18:52:01.056408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 
00:26:44.511 [2024-07-15 18:52:01.056657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 18:52:01.056689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 18:52:01.056912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 18:52:01.056942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 18:52:01.057217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 18:52:01.057256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.057557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.057588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.057892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.057902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 
00:26:44.512 [2024-07-15 18:52:01.058162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.058192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.058480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.058509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.058723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.058753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.059063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.059073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.059323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.059334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 
00:26:44.512 [2024-07-15 18:52:01.059550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.059560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.059721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.059731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.059992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.060002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.060266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.060276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.060565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.060594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 
00:26:44.512 [2024-07-15 18:52:01.060842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.060873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.061168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.061197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.061520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.061551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.061838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.061868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.062139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.062169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 
00:26:44.512 [2024-07-15 18:52:01.062367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.062377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.062574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.062584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.062761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.062771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.063007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.063036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.063285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.063317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 
00:26:44.512 [2024-07-15 18:52:01.063520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.063530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.063737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.063766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.064066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.064097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.064396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.064419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.064718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.064747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 
00:26:44.512 [2024-07-15 18:52:01.064976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.065005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.065202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.065211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.065387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.065397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.065556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.065566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.065785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.065814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 
00:26:44.512 [2024-07-15 18:52:01.066104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.066134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.066403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.066434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.066755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.066785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.067082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.067112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.067439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.067471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 
00:26:44.512 [2024-07-15 18:52:01.067720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.067750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.067955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.067985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.068267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.068299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 18:52:01.068619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 18:52:01.068649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.068871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.068901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 
00:26:44.513 [2024-07-15 18:52:01.069180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.069204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.069377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.069407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.069580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.069610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.069931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.069961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.070188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.070218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 
00:26:44.513 [2024-07-15 18:52:01.070399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.070430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.070584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.070615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.070827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.070857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.071117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.071147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.071464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.071496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 
00:26:44.513 [2024-07-15 18:52:01.071721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.071750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.072068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.072098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.072317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.072349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.072575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.072585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.072795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.072826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 
00:26:44.513 [2024-07-15 18:52:01.073108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.073138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.073462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.073473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.073720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.073767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.073989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.074019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.074242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.074273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 
00:26:44.513 [2024-07-15 18:52:01.074589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.074618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.074849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.074879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.075097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.075127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.075436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.075476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.075652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.075664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 
00:26:44.513 [2024-07-15 18:52:01.075813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.075824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.075964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.075974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.076148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.076160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.076420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.076455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.076679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.076714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 
00:26:44.513 [2024-07-15 18:52:01.076970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.077004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.077211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.077222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.077361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.077371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.077536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.077546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.077671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.077703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 
00:26:44.513 [2024-07-15 18:52:01.078012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.078043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.078275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.078309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.078630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.078662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.078940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 18:52:01.078971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 18:52:01.079155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.079166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 
00:26:44.514 [2024-07-15 18:52:01.079348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.079359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.079585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.079596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.079731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.079765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.079974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.080006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.080305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.080334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 
00:26:44.514 [2024-07-15 18:52:01.080511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.080522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.080700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.080711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.080942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.080952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.081159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.081193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.081494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.081529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 
00:26:44.514 [2024-07-15 18:52:01.081753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.081763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.081931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.081945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.082156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.082186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.082477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.082516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.082817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.082848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 
00:26:44.514 [2024-07-15 18:52:01.082996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.083026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.083269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.083310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.083521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.083532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.083762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.083772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.083983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.083994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 
00:26:44.514 [2024-07-15 18:52:01.084172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.084183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.084437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.084474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.084752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.084790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.085006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.085047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.085297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.085308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 
00:26:44.514 [2024-07-15 18:52:01.085541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.085553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.085793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.085832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.085987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.086017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.086165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.086196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.086414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.086445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 
00:26:44.514 [2024-07-15 18:52:01.086759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.086790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.087075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.087105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.087404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.087435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.087665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.087696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.088003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.088034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 
00:26:44.514 [2024-07-15 18:52:01.088337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.088369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.088666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.088697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.088914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 18:52:01.088944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 18:52:01.089151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.089181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-07-15 18:52:01.089487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.089518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 
00:26:44.515 [2024-07-15 18:52:01.089756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.089786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-07-15 18:52:01.090065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.090095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-07-15 18:52:01.090327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.090359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-07-15 18:52:01.090586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.090616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-07-15 18:52:01.090832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.090862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 
00:26:44.515 [2024-07-15 18:52:01.091158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.091188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-07-15 18:52:01.091492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.091525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-07-15 18:52:01.091772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.091802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-07-15 18:52:01.092016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.092047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-07-15 18:52:01.092269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.092301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 
00:26:44.515 [2024-07-15 18:52:01.092613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.092645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-07-15 18:52:01.092942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.092973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-07-15 18:52:01.093125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.093156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-07-15 18:52:01.093470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.093500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-07-15 18:52:01.093788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.093818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 
00:26:44.515 [2024-07-15 18:52:01.094104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.094114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-07-15 18:52:01.094234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.094244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-07-15 18:52:01.094494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.094524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-07-15 18:52:01.094752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.094782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-07-15 18:52:01.094985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.094996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 
00:26:44.515 [2024-07-15 18:52:01.095181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.095191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-07-15 18:52:01.095455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.095466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-07-15 18:52:01.095648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.095658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-07-15 18:52:01.095848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.095883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-07-15 18:52:01.096047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.096058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 
00:26:44.515 [2024-07-15 18:52:01.096238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.096249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-07-15 18:52:01.096452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.096463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-07-15 18:52:01.096652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.096674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-07-15 18:52:01.096922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.096952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-07-15 18:52:01.097172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.097202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 
00:26:44.515 [2024-07-15 18:52:01.097418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.097449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-07-15 18:52:01.097738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.097768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-07-15 18:52:01.097992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.098023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-07-15 18:52:01.098292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.098303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-07-15 18:52:01.098412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-07-15 18:52:01.098443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 
00:26:44.516 [2024-07-15 18:52:01.098715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.098745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.099019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.099049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.099220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.099276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.099485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.099514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.099721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.099753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 
00:26:44.516 [2024-07-15 18:52:01.100024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.100055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.100360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.100393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.100601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.100631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.100934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.100964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.101203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.101243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 
00:26:44.516 [2024-07-15 18:52:01.101538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.101569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.101740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.101770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.101998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.102028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.102246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.102273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.102451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.102461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 
00:26:44.516 [2024-07-15 18:52:01.102592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.102622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.102784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.102814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.102970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.103000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.103207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.103248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.103481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.103512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 
00:26:44.516 [2024-07-15 18:52:01.103738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.103768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.103925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.103956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.104238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.104266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.104442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.104453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.104627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.104637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 
00:26:44.516 [2024-07-15 18:52:01.104805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.104815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.104946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.104957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.105217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.105257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.105402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.105438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.105645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.105676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 
00:26:44.516 [2024-07-15 18:52:01.105899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.105929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.106151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.106162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.106359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.106371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.106594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.106604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.106725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.106736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 
00:26:44.516 [2024-07-15 18:52:01.106857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.106868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.106979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.106990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.107242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.107281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.107424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.107455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-07-15 18:52:01.107676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-07-15 18:52:01.107706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 
00:26:44.516 [2024-07-15 18:52:01.107864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.107894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.108169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.108199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.108444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.108456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.108605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.108635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.108795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.108826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 
00:26:44.517 [2024-07-15 18:52:01.108994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.109024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.109349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.109381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.109550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.109580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.109901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.109930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.110179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.110210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 
00:26:44.517 [2024-07-15 18:52:01.110378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.110409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.110698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.110709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.110895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.110905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.111068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.111078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.111209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.111258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 
00:26:44.517 [2024-07-15 18:52:01.111513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.111544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.111762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.111793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.111998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.112028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.112325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.112336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.112461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.112471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 
00:26:44.517 [2024-07-15 18:52:01.112642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.112671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.112837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.112867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.113083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.113112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.113234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.113245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.113342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.113352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 
00:26:44.517 [2024-07-15 18:52:01.113502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.113512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.113606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.113645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.113921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.113952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.114159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.114209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.114393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.114404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 
00:26:44.517 [2024-07-15 18:52:01.114572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.114602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.114823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.114853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.115073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.115103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.115320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.115330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.115564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.115595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 
00:26:44.517 [2024-07-15 18:52:01.115887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.115917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.116169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.116199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.116424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.116454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.116676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.116706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.116928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.116957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 
00:26:44.517 [2024-07-15 18:52:01.117191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-07-15 18:52:01.117201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-07-15 18:52:01.117298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.117308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.117434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.117444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.117564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.117574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.117656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.117665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 
00:26:44.518 [2024-07-15 18:52:01.117847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.117877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.118032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.118062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.118271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.118301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.118569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.118580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.118751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.118761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 
00:26:44.518 [2024-07-15 18:52:01.118866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.118876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.118990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.119000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.119264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.119295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.119449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.119480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.119641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.119670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 
00:26:44.518 [2024-07-15 18:52:01.119963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.119993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.120212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.120222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.120451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.120461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.120653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.120663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.120919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.120949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 
00:26:44.518 [2024-07-15 18:52:01.121151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.121181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.121398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.121430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.121671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.121700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.121875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.121905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.122054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.122084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 
00:26:44.518 [2024-07-15 18:52:01.122210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.122220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.122315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.122325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.122502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.122512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.122747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.122777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.122989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.123019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 
00:26:44.518 [2024-07-15 18:52:01.123294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.123325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.123549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.123580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.123785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.123815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.124056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.124085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.124307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.124317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 
00:26:44.518 [2024-07-15 18:52:01.124507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.124537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.124741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.124771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.125009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.125039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.125195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.125205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.125507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.125517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 
00:26:44.518 [2024-07-15 18:52:01.125640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.125670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-07-15 18:52:01.125830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-07-15 18:52:01.125859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 18:52:01.126075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.126105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 18:52:01.126393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.126404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 18:52:01.126634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.126644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 
00:26:44.519 [2024-07-15 18:52:01.126824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.126834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 18:52:01.127012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.127022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 18:52:01.127142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.127153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 18:52:01.127315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.127326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 18:52:01.127578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.127607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 
00:26:44.519 [2024-07-15 18:52:01.127872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.127902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 18:52:01.128174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.128204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 18:52:01.128467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.128497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 18:52:01.128767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.128797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 18:52:01.129070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.129100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 
00:26:44.519 [2024-07-15 18:52:01.129425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.129460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 18:52:01.129754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.129784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 18:52:01.130022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.130051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 18:52:01.130368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.130378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 18:52:01.130498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.130509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 
00:26:44.519 [2024-07-15 18:52:01.130705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.130735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 18:52:01.131034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.131064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 18:52:01.131334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.131345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 18:52:01.131576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.131606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 18:52:01.131822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.131851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 
00:26:44.519 [2024-07-15 18:52:01.132125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.132155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 18:52:01.132453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.132483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 18:52:01.132755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.132785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 18:52:01.133114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.133144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 18:52:01.133368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.133400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 
00:26:44.519 [2024-07-15 18:52:01.133624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.133655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 18:52:01.133978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.134007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 18:52:01.134314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.134345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 18:52:01.134635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.134665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 18:52:01.134891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.134920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 
00:26:44.519 [2024-07-15 18:52:01.135213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.135263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 18:52:01.135444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 18:52:01.135474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.135641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.135651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.135849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.135878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.136151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.136187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 
00:26:44.520 [2024-07-15 18:52:01.136440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.136450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.136727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.136757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.137017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.137046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.137334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.137345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.137592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.137602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 
00:26:44.520 [2024-07-15 18:52:01.137771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.137801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.138023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.138052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.138361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.138393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.138559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.138589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.138797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.138827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 
00:26:44.520 [2024-07-15 18:52:01.139145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.139175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.139504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.139515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.139788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.139798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.140086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.140115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.140436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.140468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 
00:26:44.520 [2024-07-15 18:52:01.140749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.140788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.141065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.141094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.141309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.141340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.141620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.141630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.141854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.141883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 
00:26:44.520 [2024-07-15 18:52:01.142156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.142186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.142466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.142477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.142666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.142676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.142913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.142943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.143241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.143251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 
00:26:44.520 [2024-07-15 18:52:01.143414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.143423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.143610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.143620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.143875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.143905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.144113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.144143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.144422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.144454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 
00:26:44.520 [2024-07-15 18:52:01.144677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.144707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.145005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.145035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.145337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.145367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.145602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.145612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.145784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.145814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 
00:26:44.520 [2024-07-15 18:52:01.146055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.146084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.146301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 18:52:01.146312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 18:52:01.146576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 18:52:01.146606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 18:52:01.146833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 18:52:01.146862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 18:52:01.147140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 18:52:01.147170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 
00:26:44.521 [2024-07-15 18:52:01.147383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 18:52:01.147394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 18:52:01.147630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 18:52:01.147659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 18:52:01.147926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 18:52:01.147955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 18:52:01.148278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 18:52:01.148310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 18:52:01.148575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 18:52:01.148605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 
00:26:44.521 [2024-07-15 18:52:01.148905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 18:52:01.148935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 18:52:01.149222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 18:52:01.149261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 18:52:01.149508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 18:52:01.149544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 18:52:01.149681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 18:52:01.149691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 18:52:01.149921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 18:52:01.149931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 
00:26:44.521 [2024-07-15 18:52:01.150101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 18:52:01.150131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 18:52:01.150271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 18:52:01.150303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 18:52:01.150569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 18:52:01.150598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 18:52:01.150923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 18:52:01.150953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 18:52:01.151176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 18:52:01.151207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 
00:26:44.521 [2024-07-15 18:52:01.151499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.521 [2024-07-15 18:52:01.151512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.521 qpair failed and we were unable to recover it.
00:26:44.521 [2024-07-15 18:52:01.151761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.521 [2024-07-15 18:52:01.151792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.521 qpair failed and we were unable to recover it.
00:26:44.521 [2024-07-15 18:52:01.152010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.521 [2024-07-15 18:52:01.152040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.521 qpair failed and we were unable to recover it.
00:26:44.521 [2024-07-15 18:52:01.152258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.521 [2024-07-15 18:52:01.152290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.521 qpair failed and we were unable to recover it.
00:26:44.521 [2024-07-15 18:52:01.152453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.521 [2024-07-15 18:52:01.152464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.521 qpair failed and we were unable to recover it.
00:26:44.521 [2024-07-15 18:52:01.152626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.521 [2024-07-15 18:52:01.152656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.521 qpair failed and we were unable to recover it.
00:26:44.521 [2024-07-15 18:52:01.152935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.521 [2024-07-15 18:52:01.152965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.521 qpair failed and we were unable to recover it.
00:26:44.521 [2024-07-15 18:52:01.153118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.521 [2024-07-15 18:52:01.153148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.521 qpair failed and we were unable to recover it.
00:26:44.521 [2024-07-15 18:52:01.153313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.521 [2024-07-15 18:52:01.153323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.521 qpair failed and we were unable to recover it.
00:26:44.521 [2024-07-15 18:52:01.153494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.521 [2024-07-15 18:52:01.153523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.521 qpair failed and we were unable to recover it.
00:26:44.521 [2024-07-15 18:52:01.153733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.521 [2024-07-15 18:52:01.153762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.521 qpair failed and we were unable to recover it.
00:26:44.521 [2024-07-15 18:52:01.154058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.521 [2024-07-15 18:52:01.154097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.521 qpair failed and we were unable to recover it.
00:26:44.521 [2024-07-15 18:52:01.154331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.521 [2024-07-15 18:52:01.154342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.521 qpair failed and we were unable to recover it.
00:26:44.521 [2024-07-15 18:52:01.154487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.521 [2024-07-15 18:52:01.154517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.521 qpair failed and we were unable to recover it.
00:26:44.521 [2024-07-15 18:52:01.154728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.521 [2024-07-15 18:52:01.154757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.521 qpair failed and we were unable to recover it.
00:26:44.521 [2024-07-15 18:52:01.155054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.521 [2024-07-15 18:52:01.155084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.521 qpair failed and we were unable to recover it.
00:26:44.521 [2024-07-15 18:52:01.155379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.521 [2024-07-15 18:52:01.155389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.521 qpair failed and we were unable to recover it.
00:26:44.521 [2024-07-15 18:52:01.155565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.521 [2024-07-15 18:52:01.155595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.521 qpair failed and we were unable to recover it.
00:26:44.521 [2024-07-15 18:52:01.155822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.521 [2024-07-15 18:52:01.155852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.521 qpair failed and we were unable to recover it.
00:26:44.521 [2024-07-15 18:52:01.156148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.521 [2024-07-15 18:52:01.156177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.521 qpair failed and we were unable to recover it.
00:26:44.521 [2024-07-15 18:52:01.156471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.521 [2024-07-15 18:52:01.156502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.521 qpair failed and we were unable to recover it.
00:26:44.521 [2024-07-15 18:52:01.156706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.521 [2024-07-15 18:52:01.156736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.521 qpair failed and we were unable to recover it.
00:26:44.521 [2024-07-15 18:52:01.156896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.521 [2024-07-15 18:52:01.156925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.521 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.157216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.157256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.157491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.157521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.157828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.157858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.157999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.158028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.158209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.158250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.158505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.158515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.158804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.158834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.159036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.159066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.159288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.159319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.159541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.159551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.159674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.159703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.159905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.159933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.160164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.160193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.160392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.160401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.160573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.160602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.160897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.160926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.161159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.161188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.161489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.161527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.161811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.161822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.162013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.162023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.162209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.162248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.162479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.162508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.162777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.162807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.163082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.163118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.163315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.163325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.163494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.163504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.163761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.163792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.164092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.164121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.164398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.164409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.164640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.164650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.164897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.164907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.165165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.165175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.165335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.165345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.165608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.165637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.165876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.165905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.166126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.166155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.166404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.166435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.166644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.166674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.167000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.167029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.167329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 18:52:01.167360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
00:26:44.522 [2024-07-15 18:52:01.167566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.167595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.167870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.167899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.168124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.168154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.168407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.168439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.168650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.168680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.168980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.169010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.169160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.169190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.169437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.169468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.169610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.169620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.169852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.169862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.170119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.170148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.170352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.170383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.170605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.170634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.170959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.170993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.171236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.171271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.171505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.171534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.171760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.171770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.172035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.172070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.172393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.172425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.172638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.172649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.172903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.172932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.173215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.173256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.173439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.173450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.173641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.173670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.173832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.173861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.174187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.174216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.174485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.174515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.174813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.174843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.175089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.175120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.175383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.175393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.175546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.175576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.175831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-07-15 18:52:01.175861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
00:26:44.523 [2024-07-15 18:52:01.176141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-07-15 18:52:01.176177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.523 [2024-07-15 18:52:01.176359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-07-15 18:52:01.176369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.792 [2024-07-15 18:52:01.176483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.176514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-07-15 18:52:01.176737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.176768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-07-15 18:52:01.177070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.177100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 
00:26:44.792 [2024-07-15 18:52:01.177374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.177407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-07-15 18:52:01.177638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.177648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-07-15 18:52:01.177838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.177848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-07-15 18:52:01.178104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.178133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-07-15 18:52:01.178346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.178376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 
00:26:44.792 [2024-07-15 18:52:01.178642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.178652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-07-15 18:52:01.178860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.178870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-07-15 18:52:01.179045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.179075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-07-15 18:52:01.179300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.179331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-07-15 18:52:01.179609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.179641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 
00:26:44.792 [2024-07-15 18:52:01.179863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.179893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-07-15 18:52:01.180102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.180133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-07-15 18:52:01.180391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.180424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-07-15 18:52:01.180694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.180705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-07-15 18:52:01.180965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.180975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 
00:26:44.792 [2024-07-15 18:52:01.181151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.181161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-07-15 18:52:01.181443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.181476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-07-15 18:52:01.181766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.181797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-07-15 18:52:01.182123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.182153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-07-15 18:52:01.182365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.182397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 
00:26:44.792 [2024-07-15 18:52:01.182671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.182706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-07-15 18:52:01.182933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.182963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-07-15 18:52:01.183260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.183291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-07-15 18:52:01.183592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.183622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-07-15 18:52:01.183920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.183950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 
00:26:44.792 [2024-07-15 18:52:01.184222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.184349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-07-15 18:52:01.184537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.184567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-07-15 18:52:01.184797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.184827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-07-15 18:52:01.185123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.185153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-07-15 18:52:01.185359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.185391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 
00:26:44.792 [2024-07-15 18:52:01.185686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.185716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-07-15 18:52:01.186042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.186073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-07-15 18:52:01.186366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-07-15 18:52:01.186397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.186640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.186669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.186899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.186928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 
00:26:44.793 [2024-07-15 18:52:01.187079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.187119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.187301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.187311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.187441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.187452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.187688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.187697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.187974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.188004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 
00:26:44.793 [2024-07-15 18:52:01.188301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.188332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.188599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.188609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.188890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.188899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.189144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.189167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.189483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.189514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 
00:26:44.793 [2024-07-15 18:52:01.189811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.189841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.190146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.190175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.190489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.190520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.190820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.190850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.191083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.191113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 
00:26:44.793 [2024-07-15 18:52:01.191312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.191323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.191590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.191619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.191889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.191920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.192242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.192273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.192550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.192560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 
00:26:44.793 [2024-07-15 18:52:01.192835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.192865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.193178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.193208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.193498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.193528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.193698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.193728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.194031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.194060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 
00:26:44.793 [2024-07-15 18:52:01.194302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.194318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.194517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.194528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.194757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.194766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.195003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.195013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.195246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.195279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 
00:26:44.793 [2024-07-15 18:52:01.195573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.195602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.195905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.195935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.196258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.196290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.196569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.196598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.196913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.196943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 
00:26:44.793 [2024-07-15 18:52:01.197250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.197281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.197577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.793 [2024-07-15 18:52:01.197607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.793 qpair failed and we were unable to recover it. 00:26:44.793 [2024-07-15 18:52:01.197831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.794 [2024-07-15 18:52:01.197861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.794 qpair failed and we were unable to recover it. 00:26:44.794 [2024-07-15 18:52:01.198140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.794 [2024-07-15 18:52:01.198169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.794 qpair failed and we were unable to recover it. 00:26:44.794 [2024-07-15 18:52:01.198499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.794 [2024-07-15 18:52:01.198521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.794 qpair failed and we were unable to recover it. 
00:26:44.794 [2024-07-15 18:52:01.198752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.794 [2024-07-15 18:52:01.198762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.794 qpair failed and we were unable to recover it. 00:26:44.794 [2024-07-15 18:52:01.198993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.794 [2024-07-15 18:52:01.199003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.794 qpair failed and we were unable to recover it. 00:26:44.794 [2024-07-15 18:52:01.199191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.794 [2024-07-15 18:52:01.199201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.794 qpair failed and we were unable to recover it. 00:26:44.794 [2024-07-15 18:52:01.199398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.794 [2024-07-15 18:52:01.199409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.794 qpair failed and we were unable to recover it. 00:26:44.794 [2024-07-15 18:52:01.199681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.794 [2024-07-15 18:52:01.199711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.794 qpair failed and we were unable to recover it. 
00:26:44.794 [2024-07-15 18:52:01.200057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.794 [2024-07-15 18:52:01.200087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.794 qpair failed and we were unable to recover it. 00:26:44.794 [2024-07-15 18:52:01.200343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.794 [2024-07-15 18:52:01.200374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.794 qpair failed and we were unable to recover it. 00:26:44.794 [2024-07-15 18:52:01.200674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.794 [2024-07-15 18:52:01.200704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.794 qpair failed and we were unable to recover it. 00:26:44.794 [2024-07-15 18:52:01.200922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.794 [2024-07-15 18:52:01.200951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.794 qpair failed and we were unable to recover it. 00:26:44.794 [2024-07-15 18:52:01.201159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.794 [2024-07-15 18:52:01.201188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.794 qpair failed and we were unable to recover it. 
00:26:44.797 [2024-07-15 18:52:01.233313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.233344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 00:26:44.797 [2024-07-15 18:52:01.233571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.233601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 00:26:44.797 [2024-07-15 18:52:01.233750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.233780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 00:26:44.797 [2024-07-15 18:52:01.234013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.234043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 00:26:44.797 [2024-07-15 18:52:01.234356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.234387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 
00:26:44.797 [2024-07-15 18:52:01.234702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.234731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 00:26:44.797 [2024-07-15 18:52:01.234887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.234917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 00:26:44.797 [2024-07-15 18:52:01.235201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.235239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 00:26:44.797 [2024-07-15 18:52:01.235533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.235544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 00:26:44.797 [2024-07-15 18:52:01.235793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.235802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 
00:26:44.797 [2024-07-15 18:52:01.236080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.236110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 00:26:44.797 [2024-07-15 18:52:01.236354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.236384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 00:26:44.797 [2024-07-15 18:52:01.236676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.236686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 00:26:44.797 [2024-07-15 18:52:01.236940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.236982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 00:26:44.797 [2024-07-15 18:52:01.237277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.237308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 
00:26:44.797 [2024-07-15 18:52:01.237516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.237546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 00:26:44.797 [2024-07-15 18:52:01.237862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.237892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 00:26:44.797 [2024-07-15 18:52:01.238178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.238208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 00:26:44.797 [2024-07-15 18:52:01.238524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.238555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 00:26:44.797 [2024-07-15 18:52:01.238763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.238773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 
00:26:44.797 [2024-07-15 18:52:01.239037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.239067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 00:26:44.797 [2024-07-15 18:52:01.239366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.239397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 00:26:44.797 [2024-07-15 18:52:01.239630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.239640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 00:26:44.797 [2024-07-15 18:52:01.239825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.239834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 00:26:44.797 [2024-07-15 18:52:01.240018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.240028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 
00:26:44.797 [2024-07-15 18:52:01.240211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.240222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 00:26:44.797 [2024-07-15 18:52:01.240476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.240486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 00:26:44.797 [2024-07-15 18:52:01.240774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.240803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 00:26:44.797 [2024-07-15 18:52:01.241016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.241045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 00:26:44.797 [2024-07-15 18:52:01.241355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.241365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 
00:26:44.797 [2024-07-15 18:52:01.241654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.241684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.797 qpair failed and we were unable to recover it. 00:26:44.797 [2024-07-15 18:52:01.241964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.797 [2024-07-15 18:52:01.241994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-07-15 18:52:01.242241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.242278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-07-15 18:52:01.242453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.242463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-07-15 18:52:01.242652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.242662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 
00:26:44.798 [2024-07-15 18:52:01.242900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.242930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-07-15 18:52:01.243135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.243165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-07-15 18:52:01.243456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.243487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-07-15 18:52:01.243793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.243803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-07-15 18:52:01.243921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.243931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 
00:26:44.798 [2024-07-15 18:52:01.244120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.244150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-07-15 18:52:01.244441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.244472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-07-15 18:52:01.244763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.244784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-07-15 18:52:01.244964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.244974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-07-15 18:52:01.245167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.245197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 
00:26:44.798 [2024-07-15 18:52:01.245504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.798 [2024-07-15 18:52:01.245535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.798 qpair failed and we were unable to recover it.
00:26:44.798 [2024-07-15 18:52:01.245846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.798 [2024-07-15 18:52:01.245876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.798 qpair failed and we were unable to recover it.
00:26:44.798 [2024-07-15 18:52:01.246165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.798 [2024-07-15 18:52:01.246194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.798 qpair failed and we were unable to recover it.
00:26:44.798 [2024-07-15 18:52:01.246533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.798 [2024-07-15 18:52:01.246602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:44.798 qpair failed and we were unable to recover it.
00:26:44.798 [2024-07-15 18:52:01.246912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.798 [2024-07-15 18:52:01.246927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:44.798 qpair failed and we were unable to recover it.
00:26:44.798 [2024-07-15 18:52:01.247205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.798 [2024-07-15 18:52:01.247250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:44.798 qpair failed and we were unable to recover it.
00:26:44.798 [2024-07-15 18:52:01.247485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.798 [2024-07-15 18:52:01.247516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:44.798 qpair failed and we were unable to recover it.
00:26:44.798 [2024-07-15 18:52:01.247815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.798 [2024-07-15 18:52:01.247848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.798 qpair failed and we were unable to recover it.
00:26:44.798 [2024-07-15 18:52:01.248149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.798 [2024-07-15 18:52:01.248179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.798 qpair failed and we were unable to recover it.
00:26:44.798 [2024-07-15 18:52:01.248478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.798 [2024-07-15 18:52:01.248509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.798 qpair failed and we were unable to recover it.
00:26:44.798 [2024-07-15 18:52:01.248746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.248775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-07-15 18:52:01.248994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.249023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-07-15 18:52:01.249297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.249339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-07-15 18:52:01.249519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.249529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-07-15 18:52:01.249788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.249818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 
00:26:44.798 [2024-07-15 18:52:01.250052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.250082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-07-15 18:52:01.250361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.250392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-07-15 18:52:01.250713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.250743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-07-15 18:52:01.250908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.250938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-07-15 18:52:01.251142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.251172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 
00:26:44.798 [2024-07-15 18:52:01.251476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.251488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-07-15 18:52:01.251744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.251754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-07-15 18:52:01.252054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.252083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-07-15 18:52:01.252402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.252434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-07-15 18:52:01.252652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.252681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 
00:26:44.798 [2024-07-15 18:52:01.253003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.253033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-07-15 18:52:01.253337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-07-15 18:52:01.253368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-07-15 18:52:01.253664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-07-15 18:52:01.253694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-07-15 18:52:01.253998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-07-15 18:52:01.254028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-07-15 18:52:01.254302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-07-15 18:52:01.254333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 
00:26:44.799 [2024-07-15 18:52:01.254538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-07-15 18:52:01.254568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-07-15 18:52:01.254811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-07-15 18:52:01.254841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-07-15 18:52:01.255111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-07-15 18:52:01.255140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-07-15 18:52:01.255459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-07-15 18:52:01.255491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-07-15 18:52:01.255770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-07-15 18:52:01.255801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 
00:26:44.799 [2024-07-15 18:52:01.256040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-07-15 18:52:01.256070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-07-15 18:52:01.256344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-07-15 18:52:01.256380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-07-15 18:52:01.256611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-07-15 18:52:01.256621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-07-15 18:52:01.256873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-07-15 18:52:01.256909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-07-15 18:52:01.257203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-07-15 18:52:01.257240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 
00:26:44.799 [2024-07-15 18:52:01.257539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.799 [2024-07-15 18:52:01.257568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.799 qpair failed and we were unable to recover it.
00:26:44.799 [2024-07-15 18:52:01.257798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.799 [2024-07-15 18:52:01.257828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.799 qpair failed and we were unable to recover it.
00:26:44.799 [2024-07-15 18:52:01.258128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.799 [2024-07-15 18:52:01.258158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.799 qpair failed and we were unable to recover it.
00:26:44.799 [2024-07-15 18:52:01.258381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.799 [2024-07-15 18:52:01.258412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.799 qpair failed and we were unable to recover it.
00:26:44.799 [2024-07-15 18:52:01.258632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.799 [2024-07-15 18:52:01.258663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.799 qpair failed and we were unable to recover it.
00:26:44.799 [2024-07-15 18:52:01.258927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.799 [2024-07-15 18:52:01.258937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.799 qpair failed and we were unable to recover it.
00:26:44.799 [2024-07-15 18:52:01.259109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.799 [2024-07-15 18:52:01.259119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.799 qpair failed and we were unable to recover it.
00:26:44.799 [2024-07-15 18:52:01.259235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.799 [2024-07-15 18:52:01.259245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.799 qpair failed and we were unable to recover it.
00:26:44.799 [2024-07-15 18:52:01.259453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.799 [2024-07-15 18:52:01.259483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.799 qpair failed and we were unable to recover it.
00:26:44.799 [2024-07-15 18:52:01.259759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.799 [2024-07-15 18:52:01.259789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.799 qpair failed and we were unable to recover it.
00:26:44.799 [2024-07-15 18:52:01.260009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.799 [2024-07-15 18:52:01.260039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.799 qpair failed and we were unable to recover it.
00:26:44.799 [2024-07-15 18:52:01.260257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.799 [2024-07-15 18:52:01.260289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.799 qpair failed and we were unable to recover it.
00:26:44.799 [2024-07-15 18:52:01.260577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.799 [2024-07-15 18:52:01.260587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.799 qpair failed and we were unable to recover it.
00:26:44.799 [2024-07-15 18:52:01.260749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.799 [2024-07-15 18:52:01.260759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.799 qpair failed and we were unable to recover it.
00:26:44.799 [2024-07-15 18:52:01.260949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.799 [2024-07-15 18:52:01.260959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.799 qpair failed and we were unable to recover it.
00:26:44.799 [2024-07-15 18:52:01.261138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.799 [2024-07-15 18:52:01.261168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.799 qpair failed and we were unable to recover it.
00:26:44.799 [2024-07-15 18:52:01.261377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.799 [2024-07-15 18:52:01.261408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.799 qpair failed and we were unable to recover it.
00:26:44.799 [2024-07-15 18:52:01.261728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.799 [2024-07-15 18:52:01.261758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.799 qpair failed and we were unable to recover it.
00:26:44.799 [2024-07-15 18:52:01.261972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.799 [2024-07-15 18:52:01.262002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.799 qpair failed and we were unable to recover it.
00:26:44.799 [2024-07-15 18:52:01.262302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.799 [2024-07-15 18:52:01.262333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.799 qpair failed and we were unable to recover it.
00:26:44.799 [2024-07-15 18:52:01.262631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.799 [2024-07-15 18:52:01.262667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.799 qpair failed and we were unable to recover it.
00:26:44.799 [2024-07-15 18:52:01.262966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.799 [2024-07-15 18:52:01.262996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.799 qpair failed and we were unable to recover it.
00:26:44.799 [2024-07-15 18:52:01.263292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.799 [2024-07-15 18:52:01.263323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.799 qpair failed and we were unable to recover it.
00:26:44.799 [2024-07-15 18:52:01.263621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.799 [2024-07-15 18:52:01.263651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.799 qpair failed and we were unable to recover it.
00:26:44.799 [2024-07-15 18:52:01.263916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.799 [2024-07-15 18:52:01.263925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.799 qpair failed and we were unable to recover it.
00:26:44.799 [2024-07-15 18:52:01.264159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.799 [2024-07-15 18:52:01.264189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.799 qpair failed and we were unable to recover it.
00:26:44.799 [2024-07-15 18:52:01.264496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.799 [2024-07-15 18:52:01.264528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.799 qpair failed and we were unable to recover it.
00:26:44.799 [2024-07-15 18:52:01.264729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.799 [2024-07-15 18:52:01.264740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.799 qpair failed and we were unable to recover it.
00:26:44.799 [2024-07-15 18:52:01.264926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.264956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.265186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.265216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.265521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.265551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.265845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.265875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.266148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.266177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.266444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.266475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.266799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.266828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.267055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.267085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.267383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.267414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.267681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.267691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.267920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.267930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.268215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.268253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.268469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.268499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.268807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.268845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.269022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.269031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.269216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.269229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.269477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.269487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.269746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.269776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.270046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.270076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.270366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.270402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.270566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.270596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.270820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.270849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.271155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.271185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.271407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.271438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.271722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.271752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.272049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.272059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.272192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.272221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.272447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.272478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.272691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.272721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.273040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.273070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.273368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.273400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.273703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.273733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.274029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.274059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.274368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.274400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.800 qpair failed and we were unable to recover it.
00:26:44.800 [2024-07-15 18:52:01.274691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.800 [2024-07-15 18:52:01.274720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.275014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.275043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.275346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.275377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.275604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.275614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.275802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.275832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.276035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.276065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.276384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.276415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.276643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.276673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.276811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.276852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.277032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.277042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.277304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.277314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.277570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.277580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.277799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.277809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.278082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.278112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.278390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.278421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.278694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.278723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.278966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.278996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.279296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.279327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.279631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.279662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.279900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.279930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.280159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.280189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.280417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.280447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.280745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.280775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.281019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.281050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.281342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.281373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.281651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.281686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.282006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.282036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.282319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.282350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.282591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.282633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.282864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.282873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.283044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.283054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.283241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.283272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.283564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.801 [2024-07-15 18:52:01.283594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.801 qpair failed and we were unable to recover it.
00:26:44.801 [2024-07-15 18:52:01.283919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-07-15 18:52:01.283948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-07-15 18:52:01.284238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-07-15 18:52:01.284269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-07-15 18:52:01.284549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-07-15 18:52:01.284579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-07-15 18:52:01.284886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-07-15 18:52:01.284915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-07-15 18:52:01.285215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-07-15 18:52:01.285256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 
00:26:44.801 [2024-07-15 18:52:01.285546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-07-15 18:52:01.285576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-07-15 18:52:01.285745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-07-15 18:52:01.285774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-07-15 18:52:01.286079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-07-15 18:52:01.286109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-07-15 18:52:01.286405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.286437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.286741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.286771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 
00:26:44.802 [2024-07-15 18:52:01.286994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.287004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.287267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.287298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.287536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.287545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.287844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.287874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.288099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.288129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 
00:26:44.802 [2024-07-15 18:52:01.288342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.288373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.288690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.288720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.288929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.288958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.289244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.289274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.289554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.289583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 
00:26:44.802 [2024-07-15 18:52:01.289891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.289921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.290138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.290167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.290404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.290435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.290734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.290764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.291034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.291044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 
00:26:44.802 [2024-07-15 18:52:01.291338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.291349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.291582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.291592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.291822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.291832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.292037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.292047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.292223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.292271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 
00:26:44.802 [2024-07-15 18:52:01.292560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.292591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.292800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.292830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.293055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.293090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.293396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.293427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.293734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.293764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 
00:26:44.802 [2024-07-15 18:52:01.294054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.294064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.294295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.294306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.294564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.294594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.294893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.294923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.295195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.295235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 
00:26:44.802 [2024-07-15 18:52:01.295513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.295543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.295849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.295879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.296176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.296206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.296454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.296484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.296801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.296831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 
00:26:44.802 [2024-07-15 18:52:01.297151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.297180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.297492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.297524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.297848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.297878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-07-15 18:52:01.298157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-07-15 18:52:01.298188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.298406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.298436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 
00:26:44.803 [2024-07-15 18:52:01.298715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.298751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.298972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.299001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.299246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.299278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.299572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.299602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.299908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.299938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 
00:26:44.803 [2024-07-15 18:52:01.300148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.300178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.300468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.300500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.300728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.300758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.301035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.301065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.301376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.301408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 
00:26:44.803 [2024-07-15 18:52:01.301608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.301618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.301849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.301859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.302026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.302047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.302296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.302327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.302601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.302612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 
00:26:44.803 [2024-07-15 18:52:01.302867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.302877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.303156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.303185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.303447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.303477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.303687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.303698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.303957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.303987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 
00:26:44.803 [2024-07-15 18:52:01.304286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.304318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.304618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.304647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.304855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.304889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.305131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.305162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.305442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.305473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 
00:26:44.803 [2024-07-15 18:52:01.305693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.305703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.305882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.305893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.306154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.306182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.306403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.306434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.306670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.306700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 
00:26:44.803 [2024-07-15 18:52:01.306998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.307019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.307213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.307223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.307432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.307442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.307638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.307667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.307998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.308027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 
00:26:44.803 [2024-07-15 18:52:01.308335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.308367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.308658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.308668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.308855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.308866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.309051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-07-15 18:52:01.309080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-07-15 18:52:01.309331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-07-15 18:52:01.309362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.804 qpair failed and we were unable to recover it. 
00:26:44.804 [2024-07-15 18:52:01.309562 through 18:52:01.340903] (the same three-record retry sequence repeats ~110 more times, only the timestamps changing):
00:26:44.804 posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.804 nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:44.804 qpair failed and we were unable to recover it.
00:26:44.806 [2024-07-15 18:52:01.341160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-07-15 18:52:01.341200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 00:26:44.806 [2024-07-15 18:52:01.341560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-07-15 18:52:01.341590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 00:26:44.806 [2024-07-15 18:52:01.341832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-07-15 18:52:01.341862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 00:26:44.806 [2024-07-15 18:52:01.342160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-07-15 18:52:01.342190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 00:26:44.806 [2024-07-15 18:52:01.342503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-07-15 18:52:01.342534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 
00:26:44.806 [2024-07-15 18:52:01.342702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-07-15 18:52:01.342732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 00:26:44.806 [2024-07-15 18:52:01.343023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.343053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.343284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.343315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.343549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.343579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.343791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.343801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 
00:26:44.807 [2024-07-15 18:52:01.344016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.344045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.344325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.344356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.344579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.344609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.344839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.344869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.345172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.345182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 
00:26:44.807 [2024-07-15 18:52:01.345350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.345361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.345573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.345602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.345825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.345854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.346151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.346180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.346489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.346521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 
00:26:44.807 [2024-07-15 18:52:01.346816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.346846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.347170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.347200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.347531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.347561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.347872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.347902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.348215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.348257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 
00:26:44.807 [2024-07-15 18:52:01.348561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.348591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.348875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.348905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.349136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.349166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.349477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.349508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.349807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.349837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 
00:26:44.807 [2024-07-15 18:52:01.350053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.350083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.350397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.350429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.350718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.350752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.351059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.351069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.351331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.351362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 
00:26:44.807 [2024-07-15 18:52:01.351695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.351736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.352009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.352039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.352356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.352387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.352695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.352725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.353022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.353052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 
00:26:44.807 [2024-07-15 18:52:01.353304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.353334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.353565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.353595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.353868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.353879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.354115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.354125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.354311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.354322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 
00:26:44.807 [2024-07-15 18:52:01.354532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.354543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-07-15 18:52:01.354753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-07-15 18:52:01.354784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.808 [2024-07-15 18:52:01.355036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-07-15 18:52:01.355066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-07-15 18:52:01.355371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-07-15 18:52:01.355403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-07-15 18:52:01.355725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-07-15 18:52:01.355755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 
00:26:44.808 [2024-07-15 18:52:01.356039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-07-15 18:52:01.356069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-07-15 18:52:01.356379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-07-15 18:52:01.356410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-07-15 18:52:01.356728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-07-15 18:52:01.356758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-07-15 18:52:01.356991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-07-15 18:52:01.357030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-07-15 18:52:01.357270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-07-15 18:52:01.357281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 
00:26:44.808 [2024-07-15 18:52:01.357583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-07-15 18:52:01.357613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-07-15 18:52:01.357935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-07-15 18:52:01.357966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-07-15 18:52:01.358220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-07-15 18:52:01.358262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-07-15 18:52:01.358505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-07-15 18:52:01.358535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-07-15 18:52:01.358763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-07-15 18:52:01.358794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 
00:26:44.808 [2024-07-15 18:52:01.359023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-07-15 18:52:01.359053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-07-15 18:52:01.359301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-07-15 18:52:01.359333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-07-15 18:52:01.359625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-07-15 18:52:01.359655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-07-15 18:52:01.359962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-07-15 18:52:01.359992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-07-15 18:52:01.360283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-07-15 18:52:01.360294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 
00:26:44.808 [2024-07-15 18:52:01.360613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-07-15 18:52:01.360644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-07-15 18:52:01.360948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-07-15 18:52:01.360978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-07-15 18:52:01.361241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-07-15 18:52:01.361272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-07-15 18:52:01.361431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-07-15 18:52:01.361461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-07-15 18:52:01.361753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-07-15 18:52:01.361783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 
00:26:44.808 [2024-07-15 18:52:01.362006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-07-15 18:52:01.362016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-07-15 18:52:01.362287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-07-15 18:52:01.362297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-07-15 18:52:01.362491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-07-15 18:52:01.362504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-07-15 18:52:01.362678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-07-15 18:52:01.362688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-07-15 18:52:01.362964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-07-15 18:52:01.362993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 
00:26:44.808 [2024-07-15 18:52:01.363211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-07-15 18:52:01.363265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it.
[editor's note: the posix_sock_create / nvme_tcp_qpair_connect_sock error pair above repeats continuously from 18:52:01.363 through 18:52:01.397 (log timestamps 00:26:44.808-00:26:44.813). Every reconnect attempt to addr=10.0.0.2, port=4420 on tqpair=0x7ff7c0000b90 fails identically with errno = 111 and ends with "qpair failed and we were unable to recover it."; the repeats are elided here.]
00:26:44.813 [2024-07-15 18:52:01.397380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.813 [2024-07-15 18:52:01.397411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.813 qpair failed and we were unable to recover it. 00:26:44.813 [2024-07-15 18:52:01.397695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.813 [2024-07-15 18:52:01.397726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.813 qpair failed and we were unable to recover it. 00:26:44.813 [2024-07-15 18:52:01.398040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.813 [2024-07-15 18:52:01.398076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.813 qpair failed and we were unable to recover it. 00:26:44.813 [2024-07-15 18:52:01.398371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.813 [2024-07-15 18:52:01.398401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.813 qpair failed and we were unable to recover it. 00:26:44.813 [2024-07-15 18:52:01.398712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.813 [2024-07-15 18:52:01.398743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.813 qpair failed and we were unable to recover it. 
00:26:44.813 [2024-07-15 18:52:01.399043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.813 [2024-07-15 18:52:01.399073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.813 qpair failed and we were unable to recover it. 00:26:44.813 [2024-07-15 18:52:01.399359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.813 [2024-07-15 18:52:01.399391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.813 qpair failed and we were unable to recover it. 00:26:44.813 [2024-07-15 18:52:01.399644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.813 [2024-07-15 18:52:01.399674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.813 qpair failed and we were unable to recover it. 00:26:44.813 [2024-07-15 18:52:01.399958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.813 [2024-07-15 18:52:01.399988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.813 qpair failed and we were unable to recover it. 00:26:44.813 [2024-07-15 18:52:01.400255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.813 [2024-07-15 18:52:01.400287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.813 qpair failed and we were unable to recover it. 
00:26:44.813 [2024-07-15 18:52:01.400614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.813 [2024-07-15 18:52:01.400645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.813 qpair failed and we were unable to recover it. 00:26:44.813 [2024-07-15 18:52:01.400955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.813 [2024-07-15 18:52:01.400985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.813 qpair failed and we were unable to recover it. 00:26:44.813 [2024-07-15 18:52:01.401284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.813 [2024-07-15 18:52:01.401316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.813 qpair failed and we were unable to recover it. 00:26:44.814 [2024-07-15 18:52:01.401559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.814 [2024-07-15 18:52:01.401589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.814 qpair failed and we were unable to recover it. 00:26:44.814 [2024-07-15 18:52:01.401818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.814 [2024-07-15 18:52:01.401847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.814 qpair failed and we were unable to recover it. 
00:26:44.814 [2024-07-15 18:52:01.402125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.814 [2024-07-15 18:52:01.402136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.814 qpair failed and we were unable to recover it. 00:26:44.814 [2024-07-15 18:52:01.402414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.814 [2024-07-15 18:52:01.402445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.814 qpair failed and we were unable to recover it. 00:26:44.814 [2024-07-15 18:52:01.402739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.814 [2024-07-15 18:52:01.402769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.814 qpair failed and we were unable to recover it. 00:26:44.814 [2024-07-15 18:52:01.403085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.814 [2024-07-15 18:52:01.403115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.814 qpair failed and we were unable to recover it. 00:26:44.814 [2024-07-15 18:52:01.403394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.814 [2024-07-15 18:52:01.403425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.814 qpair failed and we were unable to recover it. 
00:26:44.814 [2024-07-15 18:52:01.403669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.814 [2024-07-15 18:52:01.403700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.814 qpair failed and we were unable to recover it. 00:26:44.814 [2024-07-15 18:52:01.403943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.814 [2024-07-15 18:52:01.403980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.814 qpair failed and we were unable to recover it. 00:26:44.814 [2024-07-15 18:52:01.404243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.814 [2024-07-15 18:52:01.404271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.814 qpair failed and we were unable to recover it. 00:26:44.814 [2024-07-15 18:52:01.404515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.814 [2024-07-15 18:52:01.404526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.814 qpair failed and we were unable to recover it. 00:26:44.814 [2024-07-15 18:52:01.404847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.814 [2024-07-15 18:52:01.404877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.814 qpair failed and we were unable to recover it. 
00:26:44.814 [2024-07-15 18:52:01.405194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.814 [2024-07-15 18:52:01.405234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.814 qpair failed and we were unable to recover it. 00:26:44.814 [2024-07-15 18:52:01.405514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.814 [2024-07-15 18:52:01.405544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.814 qpair failed and we were unable to recover it. 00:26:44.814 [2024-07-15 18:52:01.405833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.814 [2024-07-15 18:52:01.405863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.814 qpair failed and we were unable to recover it. 00:26:44.814 [2024-07-15 18:52:01.406153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.814 [2024-07-15 18:52:01.406184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.814 qpair failed and we were unable to recover it. 00:26:44.814 [2024-07-15 18:52:01.406505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.814 [2024-07-15 18:52:01.406537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.814 qpair failed and we were unable to recover it. 
00:26:44.814 [2024-07-15 18:52:01.406825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.814 [2024-07-15 18:52:01.406856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.814 qpair failed and we were unable to recover it. 00:26:44.814 [2024-07-15 18:52:01.407166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.814 [2024-07-15 18:52:01.407177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.814 qpair failed and we were unable to recover it. 00:26:44.814 [2024-07-15 18:52:01.407471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.814 [2024-07-15 18:52:01.407514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.814 qpair failed and we were unable to recover it. 00:26:44.814 [2024-07-15 18:52:01.407840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.814 [2024-07-15 18:52:01.407870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.814 qpair failed and we were unable to recover it. 00:26:44.814 [2024-07-15 18:52:01.408175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.814 [2024-07-15 18:52:01.408206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.814 qpair failed and we were unable to recover it. 
00:26:44.814 [2024-07-15 18:52:01.408556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.814 [2024-07-15 18:52:01.408588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.814 qpair failed and we were unable to recover it. 00:26:44.814 [2024-07-15 18:52:01.408841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.814 [2024-07-15 18:52:01.408871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.814 qpair failed and we were unable to recover it. 00:26:44.814 [2024-07-15 18:52:01.409129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.814 [2024-07-15 18:52:01.409159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.814 qpair failed and we were unable to recover it. 00:26:44.814 [2024-07-15 18:52:01.409466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.814 [2024-07-15 18:52:01.409497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.814 qpair failed and we were unable to recover it. 00:26:44.814 [2024-07-15 18:52:01.409800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.814 [2024-07-15 18:52:01.409830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.814 qpair failed and we were unable to recover it. 
00:26:44.815 [2024-07-15 18:52:01.410153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.815 [2024-07-15 18:52:01.410184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.815 qpair failed and we were unable to recover it. 00:26:44.815 [2024-07-15 18:52:01.410495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.815 [2024-07-15 18:52:01.410526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.815 qpair failed and we were unable to recover it. 00:26:44.815 [2024-07-15 18:52:01.410758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.815 [2024-07-15 18:52:01.410793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.815 qpair failed and we were unable to recover it. 00:26:44.815 [2024-07-15 18:52:01.411013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.815 [2024-07-15 18:52:01.411024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.815 qpair failed and we were unable to recover it. 00:26:44.815 [2024-07-15 18:52:01.411275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.815 [2024-07-15 18:52:01.411306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.815 qpair failed and we were unable to recover it. 
00:26:44.815 [2024-07-15 18:52:01.411564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.815 [2024-07-15 18:52:01.411594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.815 qpair failed and we were unable to recover it. 00:26:44.815 [2024-07-15 18:52:01.411847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.815 [2024-07-15 18:52:01.411878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.815 qpair failed and we were unable to recover it. 00:26:44.815 [2024-07-15 18:52:01.412176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.815 [2024-07-15 18:52:01.412186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.815 qpair failed and we were unable to recover it. 00:26:44.815 [2024-07-15 18:52:01.412469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.815 [2024-07-15 18:52:01.412500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.815 qpair failed and we were unable to recover it. 00:26:44.815 [2024-07-15 18:52:01.412839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.815 [2024-07-15 18:52:01.412869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.815 qpair failed and we were unable to recover it. 
00:26:44.815 [2024-07-15 18:52:01.413177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.815 [2024-07-15 18:52:01.413208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.815 qpair failed and we were unable to recover it. 00:26:44.815 [2024-07-15 18:52:01.413548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.815 [2024-07-15 18:52:01.413579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.815 qpair failed and we were unable to recover it. 00:26:44.815 [2024-07-15 18:52:01.413865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.815 [2024-07-15 18:52:01.413896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.815 qpair failed and we were unable to recover it. 00:26:44.815 [2024-07-15 18:52:01.414217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.815 [2024-07-15 18:52:01.414258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.815 qpair failed and we were unable to recover it. 00:26:44.815 [2024-07-15 18:52:01.414567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.815 [2024-07-15 18:52:01.414598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.815 qpair failed and we were unable to recover it. 
00:26:44.815 [2024-07-15 18:52:01.414835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.815 [2024-07-15 18:52:01.414865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.815 qpair failed and we were unable to recover it. 00:26:44.815 [2024-07-15 18:52:01.415184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.815 [2024-07-15 18:52:01.415214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.815 qpair failed and we were unable to recover it. 00:26:44.815 [2024-07-15 18:52:01.415466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.815 [2024-07-15 18:52:01.415497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.815 qpair failed and we were unable to recover it. 00:26:44.815 [2024-07-15 18:52:01.415724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.815 [2024-07-15 18:52:01.415754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.815 qpair failed and we were unable to recover it. 00:26:44.815 [2024-07-15 18:52:01.415994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.815 [2024-07-15 18:52:01.416024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.815 qpair failed and we were unable to recover it. 
00:26:44.815 [2024-07-15 18:52:01.416331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.815 [2024-07-15 18:52:01.416375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.815 qpair failed and we were unable to recover it. 00:26:44.815 [2024-07-15 18:52:01.416591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.815 [2024-07-15 18:52:01.416621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.815 qpair failed and we were unable to recover it. 00:26:44.815 [2024-07-15 18:52:01.416936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.815 [2024-07-15 18:52:01.416966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.815 qpair failed and we were unable to recover it. 00:26:44.815 [2024-07-15 18:52:01.417306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.815 [2024-07-15 18:52:01.417337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.815 qpair failed and we were unable to recover it. 00:26:44.815 [2024-07-15 18:52:01.417645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.815 [2024-07-15 18:52:01.417675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.815 qpair failed and we were unable to recover it. 
00:26:44.815 [2024-07-15 18:52:01.417916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.815 [2024-07-15 18:52:01.417948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.815 qpair failed and we were unable to recover it. 00:26:44.815 [2024-07-15 18:52:01.418244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.815 [2024-07-15 18:52:01.418275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.815 qpair failed and we were unable to recover it. 00:26:44.816 [2024-07-15 18:52:01.418607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.816 [2024-07-15 18:52:01.418637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.816 qpair failed and we were unable to recover it. 00:26:44.816 [2024-07-15 18:52:01.418948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.816 [2024-07-15 18:52:01.418979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.816 qpair failed and we were unable to recover it. 00:26:44.816 [2024-07-15 18:52:01.419198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.816 [2024-07-15 18:52:01.419243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.816 qpair failed and we were unable to recover it. 
00:26:44.816 [2024-07-15 18:52:01.419551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.816 [2024-07-15 18:52:01.419581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.816 qpair failed and we were unable to recover it. 00:26:44.816 [2024-07-15 18:52:01.419886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.816 [2024-07-15 18:52:01.419917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.816 qpair failed and we were unable to recover it. 00:26:44.816 [2024-07-15 18:52:01.420218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.816 [2024-07-15 18:52:01.420274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.816 qpair failed and we were unable to recover it. 00:26:44.816 [2024-07-15 18:52:01.420525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.816 [2024-07-15 18:52:01.420555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.816 qpair failed and we were unable to recover it. 00:26:44.816 [2024-07-15 18:52:01.420886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.816 [2024-07-15 18:52:01.420918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.816 qpair failed and we were unable to recover it. 
00:26:44.816 [2024-07-15 18:52:01.421200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.816 [2024-07-15 18:52:01.421252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.816 qpair failed and we were unable to recover it. 00:26:44.816 [2024-07-15 18:52:01.421564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.816 [2024-07-15 18:52:01.421594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.816 qpair failed and we were unable to recover it. 00:26:44.816 [2024-07-15 18:52:01.421903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.816 [2024-07-15 18:52:01.421933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.816 qpair failed and we were unable to recover it. 00:26:44.816 [2024-07-15 18:52:01.422116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.816 [2024-07-15 18:52:01.422145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.816 qpair failed and we were unable to recover it. 00:26:44.816 [2024-07-15 18:52:01.422478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.816 [2024-07-15 18:52:01.422511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.816 qpair failed and we were unable to recover it. 
00:26:44.816 [2024-07-15 18:52:01.422770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.816 [2024-07-15 18:52:01.422801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.816 qpair failed and we were unable to recover it. 00:26:44.816 [2024-07-15 18:52:01.423084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.816 [2024-07-15 18:52:01.423114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.816 qpair failed and we were unable to recover it. 00:26:44.816 [2024-07-15 18:52:01.423350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.816 [2024-07-15 18:52:01.423381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.816 qpair failed and we were unable to recover it. 00:26:44.816 [2024-07-15 18:52:01.423687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.816 [2024-07-15 18:52:01.423718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.816 qpair failed and we were unable to recover it. 00:26:44.816 [2024-07-15 18:52:01.424023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.816 [2024-07-15 18:52:01.424053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.816 qpair failed and we were unable to recover it. 
00:26:44.816 [2024-07-15 18:52:01.424360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.816 [2024-07-15 18:52:01.424391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.816 qpair failed and we were unable to recover it. 00:26:44.816 [2024-07-15 18:52:01.424699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.816 [2024-07-15 18:52:01.424730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.816 qpair failed and we were unable to recover it. 00:26:44.816 [2024-07-15 18:52:01.424955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.816 [2024-07-15 18:52:01.424985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.816 qpair failed and we were unable to recover it. 00:26:44.816 [2024-07-15 18:52:01.425276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.816 [2024-07-15 18:52:01.425307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.816 qpair failed and we were unable to recover it. 00:26:44.816 [2024-07-15 18:52:01.425644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.816 [2024-07-15 18:52:01.425674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.816 qpair failed and we were unable to recover it. 
00:26:44.816 [2024-07-15 18:52:01.425977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.816 [2024-07-15 18:52:01.426007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.816 qpair failed and we were unable to recover it. 00:26:44.816 [2024-07-15 18:52:01.426307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.816 [2024-07-15 18:52:01.426318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.816 qpair failed and we were unable to recover it. 00:26:44.816 [2024-07-15 18:52:01.426490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.816 [2024-07-15 18:52:01.426500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.816 qpair failed and we were unable to recover it. 00:26:44.816 [2024-07-15 18:52:01.426693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.816 [2024-07-15 18:52:01.426703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.816 qpair failed and we were unable to recover it. 00:26:44.817 [2024-07-15 18:52:01.426944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.426954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 
00:26:44.817 [2024-07-15 18:52:01.427131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.427141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 00:26:44.817 [2024-07-15 18:52:01.427416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.427460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 00:26:44.817 [2024-07-15 18:52:01.427760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.427791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 00:26:44.817 [2024-07-15 18:52:01.428095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.428126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 00:26:44.817 [2024-07-15 18:52:01.428432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.428463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 
00:26:44.817 [2024-07-15 18:52:01.428790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.428821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 00:26:44.817 [2024-07-15 18:52:01.429075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.429106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 00:26:44.817 [2024-07-15 18:52:01.429416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.429447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 00:26:44.817 [2024-07-15 18:52:01.429747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.429778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 00:26:44.817 [2024-07-15 18:52:01.430107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.430138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 
00:26:44.817 [2024-07-15 18:52:01.430457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.430489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 00:26:44.817 [2024-07-15 18:52:01.430781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.430810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 00:26:44.817 [2024-07-15 18:52:01.431118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.431148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 00:26:44.817 [2024-07-15 18:52:01.431452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.431484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 00:26:44.817 [2024-07-15 18:52:01.431792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.431832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 
00:26:44.817 [2024-07-15 18:52:01.431979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.432009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 00:26:44.817 [2024-07-15 18:52:01.432296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.432327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 00:26:44.817 [2024-07-15 18:52:01.432642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.432673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 00:26:44.817 [2024-07-15 18:52:01.432832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.432863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 00:26:44.817 [2024-07-15 18:52:01.433157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.433187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 
00:26:44.817 [2024-07-15 18:52:01.433537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.433569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 00:26:44.817 [2024-07-15 18:52:01.433877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.433908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 00:26:44.817 [2024-07-15 18:52:01.434203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.434245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 00:26:44.817 [2024-07-15 18:52:01.434552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.434582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 00:26:44.817 [2024-07-15 18:52:01.434874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.434905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 
00:26:44.817 [2024-07-15 18:52:01.435222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.435262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 00:26:44.817 [2024-07-15 18:52:01.435591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.435621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 00:26:44.817 [2024-07-15 18:52:01.435931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.435961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 00:26:44.817 [2024-07-15 18:52:01.436264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.436296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 00:26:44.817 [2024-07-15 18:52:01.436524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.436554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 
00:26:44.817 [2024-07-15 18:52:01.436862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.436892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 00:26:44.817 [2024-07-15 18:52:01.437193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.437244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 00:26:44.817 [2024-07-15 18:52:01.437462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.437493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 00:26:44.817 [2024-07-15 18:52:01.437727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.437757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 00:26:44.817 [2024-07-15 18:52:01.437984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.817 [2024-07-15 18:52:01.437995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.817 qpair failed and we were unable to recover it. 
00:26:44.817 [2024-07-15 18:52:01.438189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.438199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.438392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.438403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.438662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.438693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.438976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.439005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.439286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.439296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 
00:26:44.818 [2024-07-15 18:52:01.439572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.439582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.439827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.439837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.440089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.440119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.440406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.440437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.440753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.440784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 
00:26:44.818 [2024-07-15 18:52:01.441092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.441122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.441425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.441456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.441697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.441727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.441945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.441974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.442272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.442282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 
00:26:44.818 [2024-07-15 18:52:01.442554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.442584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.442903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.442934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.443170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.443180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.443389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.443420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.443656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.443691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 
00:26:44.818 [2024-07-15 18:52:01.443930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.443940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.444245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.444276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.444607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.444637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.444873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.444903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.445211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.445251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 
00:26:44.818 [2024-07-15 18:52:01.445453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.445464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.445574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.445604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.445920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.445950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.446195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.446235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.446575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.446605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 
00:26:44.818 [2024-07-15 18:52:01.446912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.446941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.447248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.447279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.447529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.447559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.447849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.447879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.448178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.448210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 
00:26:44.818 [2024-07-15 18:52:01.448505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.448536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.448771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.448801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.449108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.449138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.449352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.449383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 00:26:44.818 [2024-07-15 18:52:01.449692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.818 [2024-07-15 18:52:01.449722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.818 qpair failed and we were unable to recover it. 
00:26:44.818 [2024-07-15 18:52:01.450006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.819 [2024-07-15 18:52:01.450037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.819 qpair failed and we were unable to recover it. 
[... the identical error triplet — posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from 18:52:01.450 through 18:52:01.484; repeated occurrences elided ...]
00:26:44.823 [2024-07-15 18:52:01.484666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 18:52:01.484696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 18:52:01.485009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 18:52:01.485040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 18:52:01.485213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 18:52:01.485230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 18:52:01.485494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 18:52:01.485505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 18:52:01.485680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 18:52:01.485690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 
00:26:44.823 [2024-07-15 18:52:01.485879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 18:52:01.485910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 18:52:01.486076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 18:52:01.486087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 18:52:01.486351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 18:52:01.486383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 18:52:01.486698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 18:52:01.486729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 18:52:01.486946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 18:52:01.486977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 
00:26:44.823 [2024-07-15 18:52:01.487202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 18:52:01.487244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 18:52:01.487461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 18:52:01.487472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 18:52:01.487675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 18:52:01.487686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 18:52:01.487839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 18:52:01.487869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 18:52:01.488111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 18:52:01.488140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 
00:26:44.823 [2024-07-15 18:52:01.488441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 18:52:01.488473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 18:52:01.488784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 18:52:01.488814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 18:52:01.489054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 18:52:01.489067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 18:52:01.489338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 18:52:01.489350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 18:52:01.489622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 18:52:01.489633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 
00:26:44.823 [2024-07-15 18:52:01.489809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 18:52:01.489820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:45.094 [2024-07-15 18:52:01.490029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.094 [2024-07-15 18:52:01.490061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.094 qpair failed and we were unable to recover it. 00:26:45.094 [2024-07-15 18:52:01.490319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.094 [2024-07-15 18:52:01.490353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.094 qpair failed and we were unable to recover it. 00:26:45.094 [2024-07-15 18:52:01.490506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.094 [2024-07-15 18:52:01.490536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.094 qpair failed and we were unable to recover it. 00:26:45.094 [2024-07-15 18:52:01.490780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.094 [2024-07-15 18:52:01.490824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.094 qpair failed and we were unable to recover it. 
00:26:45.094 [2024-07-15 18:52:01.491064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.491095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 00:26:45.095 [2024-07-15 18:52:01.491261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.491293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 00:26:45.095 [2024-07-15 18:52:01.491572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.491603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 00:26:45.095 [2024-07-15 18:52:01.491939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.491950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 00:26:45.095 [2024-07-15 18:52:01.492111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.492122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 
00:26:45.095 [2024-07-15 18:52:01.492345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.492357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 00:26:45.095 [2024-07-15 18:52:01.492560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.492572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 00:26:45.095 [2024-07-15 18:52:01.492841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.492852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 00:26:45.095 [2024-07-15 18:52:01.493027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.493038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 00:26:45.095 [2024-07-15 18:52:01.493257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.493288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 
00:26:45.095 [2024-07-15 18:52:01.493534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.493565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 00:26:45.095 [2024-07-15 18:52:01.493909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.493940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 00:26:45.095 [2024-07-15 18:52:01.494166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.494197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 00:26:45.095 [2024-07-15 18:52:01.494476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.494507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 00:26:45.095 [2024-07-15 18:52:01.494817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.494848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 
00:26:45.095 [2024-07-15 18:52:01.495014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.495058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 00:26:45.095 [2024-07-15 18:52:01.495347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.495378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 00:26:45.095 [2024-07-15 18:52:01.495597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.495627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 00:26:45.095 [2024-07-15 18:52:01.495857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.495888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 00:26:45.095 [2024-07-15 18:52:01.496216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.496259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 
00:26:45.095 [2024-07-15 18:52:01.496547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.496578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 00:26:45.095 [2024-07-15 18:52:01.496878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.496909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 00:26:45.095 [2024-07-15 18:52:01.497127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.497157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 00:26:45.095 [2024-07-15 18:52:01.497464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.497496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 00:26:45.095 [2024-07-15 18:52:01.497666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.497696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 
00:26:45.095 [2024-07-15 18:52:01.497985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.498015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 00:26:45.095 [2024-07-15 18:52:01.498245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.498277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 00:26:45.095 [2024-07-15 18:52:01.498587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.498618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 00:26:45.095 [2024-07-15 18:52:01.498966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.498997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 00:26:45.095 [2024-07-15 18:52:01.499304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.499336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 
00:26:45.095 [2024-07-15 18:52:01.499562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.499592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 00:26:45.095 [2024-07-15 18:52:01.499808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-15 18:52:01.499839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.095 qpair failed and we were unable to recover it. 00:26:45.096 [2024-07-15 18:52:01.500166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-15 18:52:01.500196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.096 qpair failed and we were unable to recover it. 00:26:45.096 [2024-07-15 18:52:01.500494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-15 18:52:01.500524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.096 qpair failed and we were unable to recover it. 00:26:45.096 [2024-07-15 18:52:01.500835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-15 18:52:01.500865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.096 qpair failed and we were unable to recover it. 
00:26:45.096 [2024-07-15 18:52:01.501172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-15 18:52:01.501202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.096 qpair failed and we were unable to recover it. 00:26:45.096 [2024-07-15 18:52:01.501500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-15 18:52:01.501531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.096 qpair failed and we were unable to recover it. 00:26:45.096 [2024-07-15 18:52:01.501846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-15 18:52:01.501876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.096 qpair failed and we were unable to recover it. 00:26:45.096 [2024-07-15 18:52:01.502177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-15 18:52:01.502216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.096 qpair failed and we were unable to recover it. 00:26:45.096 [2024-07-15 18:52:01.502423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-15 18:52:01.502437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.096 qpair failed and we were unable to recover it. 
00:26:45.096 [2024-07-15 18:52:01.502711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-15 18:52:01.502740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.096 qpair failed and we were unable to recover it. 00:26:45.096 [2024-07-15 18:52:01.502899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-15 18:52:01.502929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.096 qpair failed and we were unable to recover it. 00:26:45.096 [2024-07-15 18:52:01.503191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-15 18:52:01.503221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.096 qpair failed and we were unable to recover it. 00:26:45.096 [2024-07-15 18:52:01.503542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-15 18:52:01.503572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.096 qpair failed and we were unable to recover it. 00:26:45.096 [2024-07-15 18:52:01.503882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-15 18:52:01.503912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.096 qpair failed and we were unable to recover it. 
00:26:45.096 [2024-07-15 18:52:01.504240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-15 18:52:01.504272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.096 qpair failed and we were unable to recover it. 00:26:45.096 [2024-07-15 18:52:01.504595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-15 18:52:01.504626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.096 qpair failed and we were unable to recover it. 00:26:45.096 [2024-07-15 18:52:01.504873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-15 18:52:01.504904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.096 qpair failed and we were unable to recover it. 00:26:45.096 [2024-07-15 18:52:01.505168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-15 18:52:01.505199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.096 qpair failed and we were unable to recover it. 00:26:45.096 [2024-07-15 18:52:01.505545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-15 18:52:01.505576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.096 qpair failed and we were unable to recover it. 
00:26:45.096 [2024-07-15 18:52:01.505876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-15 18:52:01.505907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.096 qpair failed and we were unable to recover it. 
00:26:45.099 [... the preceding connect()/nvme_tcp_qpair_connect_sock error pair repeated verbatim for every reconnect attempt from 18:52:01.506221 through 18:52:01.539936, each with errno = 111, tqpair=0x7ff7c0000b90, addr=10.0.0.2, port=4420, and each ending "qpair failed and we were unable to recover it." ...] 
00:26:45.099 [2024-07-15 18:52:01.540191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.099 [2024-07-15 18:52:01.540221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.099 qpair failed and we were unable to recover it. 00:26:45.099 [2024-07-15 18:52:01.540525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.099 [2024-07-15 18:52:01.540556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.099 qpair failed and we were unable to recover it. 00:26:45.099 [2024-07-15 18:52:01.540863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.099 [2024-07-15 18:52:01.540893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.099 qpair failed and we were unable to recover it. 00:26:45.099 [2024-07-15 18:52:01.541200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.099 [2024-07-15 18:52:01.541238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.099 qpair failed and we were unable to recover it. 00:26:45.099 [2024-07-15 18:52:01.541559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.099 [2024-07-15 18:52:01.541589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.099 qpair failed and we were unable to recover it. 
00:26:45.099 [2024-07-15 18:52:01.541802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.099 [2024-07-15 18:52:01.541832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.099 qpair failed and we were unable to recover it. 00:26:45.099 [2024-07-15 18:52:01.542019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.099 [2024-07-15 18:52:01.542049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.099 qpair failed and we were unable to recover it. 00:26:45.099 [2024-07-15 18:52:01.542345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.099 [2024-07-15 18:52:01.542377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.099 qpair failed and we were unable to recover it. 00:26:45.099 [2024-07-15 18:52:01.542698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.099 [2024-07-15 18:52:01.542729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.099 qpair failed and we were unable to recover it. 00:26:45.099 [2024-07-15 18:52:01.543039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.099 [2024-07-15 18:52:01.543069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.099 qpair failed and we were unable to recover it. 
00:26:45.099 [2024-07-15 18:52:01.543294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.099 [2024-07-15 18:52:01.543305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.099 qpair failed and we were unable to recover it. 00:26:45.099 [2024-07-15 18:52:01.543564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.099 [2024-07-15 18:52:01.543595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.099 qpair failed and we were unable to recover it. 00:26:45.099 [2024-07-15 18:52:01.543810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.099 [2024-07-15 18:52:01.543840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.099 qpair failed and we were unable to recover it. 00:26:45.099 [2024-07-15 18:52:01.544068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.099 [2024-07-15 18:52:01.544098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.099 qpair failed and we were unable to recover it. 00:26:45.099 [2024-07-15 18:52:01.544313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.099 [2024-07-15 18:52:01.544344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.099 qpair failed and we were unable to recover it. 
00:26:45.099 [2024-07-15 18:52:01.544651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.099 [2024-07-15 18:52:01.544682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.099 qpair failed and we were unable to recover it. 00:26:45.099 [2024-07-15 18:52:01.544911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.099 [2024-07-15 18:52:01.544941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.099 qpair failed and we were unable to recover it. 00:26:45.099 [2024-07-15 18:52:01.545169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.099 [2024-07-15 18:52:01.545199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.099 qpair failed and we were unable to recover it. 00:26:45.099 [2024-07-15 18:52:01.545517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.099 [2024-07-15 18:52:01.545547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.099 qpair failed and we were unable to recover it. 00:26:45.099 [2024-07-15 18:52:01.545762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.545792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 
00:26:45.100 [2024-07-15 18:52:01.546027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.546057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.546365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.546398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.546622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.546633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.546803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.546814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.547037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.547067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 
00:26:45.100 [2024-07-15 18:52:01.547295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.547306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.547507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.547538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.547839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.547869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.548122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.548152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.548389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.548420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 
00:26:45.100 [2024-07-15 18:52:01.548724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.548754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.549060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.549089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.549321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.549332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.549582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.549612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.549944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.549985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 
00:26:45.100 [2024-07-15 18:52:01.550289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.550300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.550420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.550430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.550707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.550738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.550952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.550982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.551269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.551300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 
00:26:45.100 [2024-07-15 18:52:01.551592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.551622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.551837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.551868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.552129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.552160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.552390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.552401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.552575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.552585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 
00:26:45.100 [2024-07-15 18:52:01.552777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.552808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.553118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.553148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.553437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.553470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.553792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.553823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.554153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.554183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 
00:26:45.100 [2024-07-15 18:52:01.554490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.554522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.554826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.554856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.555071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.555101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.555420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.555452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.555742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.555774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 
00:26:45.100 [2024-07-15 18:52:01.556090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.556120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.556415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.556447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.556756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.556787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.100 qpair failed and we were unable to recover it. 00:26:45.100 [2024-07-15 18:52:01.557086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.100 [2024-07-15 18:52:01.557116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.557346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.557358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 
00:26:45.101 [2024-07-15 18:52:01.557556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.557566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.557741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.557765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.557963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.557993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.558161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.558191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.558452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.558483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 
00:26:45.101 [2024-07-15 18:52:01.558718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.558748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.559001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.559031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.559353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.559365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.559605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.559616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.559956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.559986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 
00:26:45.101 [2024-07-15 18:52:01.560198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.560236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.560480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.560511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.560797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.560827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.561113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.561144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.561426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.561440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 
00:26:45.101 [2024-07-15 18:52:01.561634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.561664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.561991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.562022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.562300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.562332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.562500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.562530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.562846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.562876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 
00:26:45.101 [2024-07-15 18:52:01.563094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.563124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.563418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.563450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.563763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.563793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.564087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.564117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.564433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.564464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 
00:26:45.101 [2024-07-15 18:52:01.564758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.564788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.565085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.565115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.565422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.565454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.565755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.565785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.566082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.566113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 
00:26:45.101 [2024-07-15 18:52:01.566423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.566454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.566751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.566782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.567093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.567123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.567423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.567455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.567666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.567677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 
00:26:45.101 [2024-07-15 18:52:01.567953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.567983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.568186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.568197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.568414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.568425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.101 [2024-07-15 18:52:01.568550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.101 [2024-07-15 18:52:01.568581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.101 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.568910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.568941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 
00:26:45.102 [2024-07-15 18:52:01.569263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.569295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.569513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.569548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.569765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.569796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.570108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.570138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.570398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.570429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 
00:26:45.102 [2024-07-15 18:52:01.570607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.570638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.570925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.570955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.571247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.571280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.571588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.571618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.571848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.571878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 
00:26:45.102 [2024-07-15 18:52:01.572164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.572195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.572525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.572556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.572886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.572917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.573239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.573271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.573494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.573524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 
00:26:45.102 [2024-07-15 18:52:01.573814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.573845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.574086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.574117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.574429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.574462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.574707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.574737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.574970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.575001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 
00:26:45.102 [2024-07-15 18:52:01.575246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.575277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.575507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.575518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.575641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.575652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.575833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.575862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.576149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.576179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 
00:26:45.102 [2024-07-15 18:52:01.576349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.576381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.576590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.576601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.576853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.576883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.577197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.577239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.577545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.577576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 
00:26:45.102 [2024-07-15 18:52:01.577860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.577890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.578053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.578083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.578414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.578455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.578750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.578780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.579079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.579111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 
00:26:45.102 [2024-07-15 18:52:01.579421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.579452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.579758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.579789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.580131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.102 [2024-07-15 18:52:01.580161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.102 qpair failed and we were unable to recover it. 00:26:45.102 [2024-07-15 18:52:01.580388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.580420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 00:26:45.103 [2024-07-15 18:52:01.580732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.580762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 
00:26:45.103 [2024-07-15 18:52:01.581079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.581109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 00:26:45.103 [2024-07-15 18:52:01.581411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.581452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 00:26:45.103 [2024-07-15 18:52:01.581656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.581667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 00:26:45.103 [2024-07-15 18:52:01.581935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.581946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 00:26:45.103 [2024-07-15 18:52:01.582137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.582147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 
00:26:45.103 [2024-07-15 18:52:01.582284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.582296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 00:26:45.103 [2024-07-15 18:52:01.582508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.582538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 00:26:45.103 [2024-07-15 18:52:01.582827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.582857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 00:26:45.103 [2024-07-15 18:52:01.583098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.583128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 00:26:45.103 [2024-07-15 18:52:01.583431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.583462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 
00:26:45.103 [2024-07-15 18:52:01.583759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.583770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 00:26:45.103 [2024-07-15 18:52:01.584089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.584119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 00:26:45.103 [2024-07-15 18:52:01.584368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.584401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 00:26:45.103 [2024-07-15 18:52:01.584708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.584738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 00:26:45.103 [2024-07-15 18:52:01.585051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.585082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 
00:26:45.103 [2024-07-15 18:52:01.585399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.585432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 00:26:45.103 [2024-07-15 18:52:01.585674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.585685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 00:26:45.103 [2024-07-15 18:52:01.585874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.585885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 00:26:45.103 [2024-07-15 18:52:01.586108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.586138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 00:26:45.103 [2024-07-15 18:52:01.586457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.586489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 
00:26:45.103 [2024-07-15 18:52:01.586823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.586853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 00:26:45.103 [2024-07-15 18:52:01.587160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.587191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 00:26:45.103 [2024-07-15 18:52:01.587498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.587530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 00:26:45.103 [2024-07-15 18:52:01.587781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.587812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 00:26:45.103 [2024-07-15 18:52:01.588118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.588147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 
00:26:45.103 [2024-07-15 18:52:01.588373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.588404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 00:26:45.103 [2024-07-15 18:52:01.588637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.588648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 00:26:45.103 [2024-07-15 18:52:01.588823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.588853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.103 qpair failed and we were unable to recover it. 00:26:45.103 [2024-07-15 18:52:01.589167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.103 [2024-07-15 18:52:01.589197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.104 qpair failed and we were unable to recover it. 00:26:45.104 [2024-07-15 18:52:01.589383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecc000 is same with the state(5) to be set 00:26:45.104 [2024-07-15 18:52:01.589750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.104 [2024-07-15 18:52:01.589793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.104 qpair failed and we were unable to recover it. 
00:26:45.104 [2024-07-15 18:52:01.590062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.104 [2024-07-15 18:52:01.590097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.104 qpair failed and we were unable to recover it.
[log condensed: the three-message sequence above (posix_sock_create connect() failed with errno = 111, i.e. connection refused; nvme_tcp_qpair_connect_sock sock connection error; "qpair failed and we were unable to recover it.") repeats approximately 114 further times between 18:52:01.590 and 18:52:01.623, always targeting addr=10.0.0.2, port=4420, across tqpair values 0x7ff7b8000b90, 0x7ff7c8000b90, 0xebded0, and 0x7ff7c0000b90]
00:26:45.108 [2024-07-15 18:52:01.623388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.108 [2024-07-15 18:52:01.623419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.108 qpair failed and we were unable to recover it. 00:26:45.108 [2024-07-15 18:52:01.623660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.108 [2024-07-15 18:52:01.623690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.108 qpair failed and we were unable to recover it. 00:26:45.108 [2024-07-15 18:52:01.623924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.108 [2024-07-15 18:52:01.623959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.108 qpair failed and we were unable to recover it. 00:26:45.108 [2024-07-15 18:52:01.624166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.108 [2024-07-15 18:52:01.624196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.108 qpair failed and we were unable to recover it. 00:26:45.108 [2024-07-15 18:52:01.624471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.108 [2024-07-15 18:52:01.624483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.108 qpair failed and we were unable to recover it. 
00:26:45.108 [2024-07-15 18:52:01.624720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.108 [2024-07-15 18:52:01.624731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.108 qpair failed and we were unable to recover it. 00:26:45.108 [2024-07-15 18:52:01.624911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.108 [2024-07-15 18:52:01.624921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.108 qpair failed and we were unable to recover it. 00:26:45.108 [2024-07-15 18:52:01.625058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.108 [2024-07-15 18:52:01.625068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.108 qpair failed and we were unable to recover it. 00:26:45.108 [2024-07-15 18:52:01.625256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.108 [2024-07-15 18:52:01.625267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.108 qpair failed and we were unable to recover it. 00:26:45.108 [2024-07-15 18:52:01.625382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.108 [2024-07-15 18:52:01.625392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.108 qpair failed and we were unable to recover it. 
00:26:45.108 [2024-07-15 18:52:01.625579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.109 [2024-07-15 18:52:01.625609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 18:52:01.625887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.109 [2024-07-15 18:52:01.625916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 18:52:01.626200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.109 [2024-07-15 18:52:01.626238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 18:52:01.626569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.109 [2024-07-15 18:52:01.626600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 18:52:01.626828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.109 [2024-07-15 18:52:01.626858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.109 qpair failed and we were unable to recover it. 
00:26:45.109 [2024-07-15 18:52:01.627075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.109 [2024-07-15 18:52:01.627105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 18:52:01.627331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.109 [2024-07-15 18:52:01.627342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 18:52:01.627585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.109 [2024-07-15 18:52:01.627615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 18:52:01.627838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.109 [2024-07-15 18:52:01.627868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 18:52:01.628097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.109 [2024-07-15 18:52:01.628127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.109 qpair failed and we were unable to recover it. 
00:26:45.109 [2024-07-15 18:52:01.628436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.109 [2024-07-15 18:52:01.628447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 18:52:01.628589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.109 [2024-07-15 18:52:01.628599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 18:52:01.628851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.109 [2024-07-15 18:52:01.628881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 18:52:01.629020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.109 [2024-07-15 18:52:01.629050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 18:52:01.629280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.109 [2024-07-15 18:52:01.629312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.109 qpair failed and we were unable to recover it. 
00:26:45.109 [2024-07-15 18:52:01.629633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.109 [2024-07-15 18:52:01.629663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 18:52:01.629879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.109 [2024-07-15 18:52:01.629909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 18:52:01.630263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.109 [2024-07-15 18:52:01.630294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 18:52:01.630585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.109 [2024-07-15 18:52:01.630615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 18:52:01.630855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.109 [2024-07-15 18:52:01.630886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.109 qpair failed and we were unable to recover it. 
00:26:45.109 [2024-07-15 18:52:01.631165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.109 [2024-07-15 18:52:01.631196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 18:52:01.631503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.109 [2024-07-15 18:52:01.631534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 18:52:01.631779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.109 [2024-07-15 18:52:01.631809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 18:52:01.632115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.109 [2024-07-15 18:52:01.632146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 18:52:01.632411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.109 [2024-07-15 18:52:01.632443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.109 qpair failed and we were unable to recover it. 
00:26:45.109 [2024-07-15 18:52:01.632619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.109 [2024-07-15 18:52:01.632650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 18:52:01.632813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.109 [2024-07-15 18:52:01.632843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 18:52:01.633100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.109 [2024-07-15 18:52:01.633130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.110 qpair failed and we were unable to recover it. 00:26:45.110 [2024-07-15 18:52:01.633360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.110 [2024-07-15 18:52:01.633392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.110 qpair failed and we were unable to recover it. 00:26:45.110 [2024-07-15 18:52:01.633552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.110 [2024-07-15 18:52:01.633583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.110 qpair failed and we were unable to recover it. 
00:26:45.110 [2024-07-15 18:52:01.633894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.110 [2024-07-15 18:52:01.633932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.110 qpair failed and we were unable to recover it. 00:26:45.110 [2024-07-15 18:52:01.634088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.110 [2024-07-15 18:52:01.634119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.110 qpair failed and we were unable to recover it. 00:26:45.110 [2024-07-15 18:52:01.634421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.110 [2024-07-15 18:52:01.634453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.110 qpair failed and we were unable to recover it. 00:26:45.110 [2024-07-15 18:52:01.634707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.110 [2024-07-15 18:52:01.634738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.110 qpair failed and we were unable to recover it. 00:26:45.110 [2024-07-15 18:52:01.634975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.110 [2024-07-15 18:52:01.635005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.110 qpair failed and we were unable to recover it. 
00:26:45.110 [2024-07-15 18:52:01.635220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.110 [2024-07-15 18:52:01.635262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.110 qpair failed and we were unable to recover it. 00:26:45.110 [2024-07-15 18:52:01.635599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.110 [2024-07-15 18:52:01.635629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.110 qpair failed and we were unable to recover it. 00:26:45.110 [2024-07-15 18:52:01.635917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.110 [2024-07-15 18:52:01.635947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.110 qpair failed and we were unable to recover it. 00:26:45.110 [2024-07-15 18:52:01.636179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.110 [2024-07-15 18:52:01.636210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.110 qpair failed and we were unable to recover it. 00:26:45.110 [2024-07-15 18:52:01.636391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.110 [2024-07-15 18:52:01.636422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.110 qpair failed and we were unable to recover it. 
00:26:45.110 [2024-07-15 18:52:01.636721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.110 [2024-07-15 18:52:01.636752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.110 qpair failed and we were unable to recover it. 00:26:45.110 [2024-07-15 18:52:01.636945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.110 [2024-07-15 18:52:01.636976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.110 qpair failed and we were unable to recover it. 00:26:45.110 [2024-07-15 18:52:01.637281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.110 [2024-07-15 18:52:01.637313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.110 qpair failed and we were unable to recover it. 00:26:45.110 [2024-07-15 18:52:01.637489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.110 [2024-07-15 18:52:01.637519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.110 qpair failed and we were unable to recover it. 00:26:45.110 [2024-07-15 18:52:01.637805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.110 [2024-07-15 18:52:01.637835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.110 qpair failed and we were unable to recover it. 
00:26:45.110 [2024-07-15 18:52:01.638143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.110 [2024-07-15 18:52:01.638173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.110 qpair failed and we were unable to recover it. 00:26:45.110 [2024-07-15 18:52:01.638478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.110 [2024-07-15 18:52:01.638510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.110 qpair failed and we were unable to recover it. 00:26:45.110 [2024-07-15 18:52:01.638810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.110 [2024-07-15 18:52:01.638820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.110 qpair failed and we were unable to recover it. 00:26:45.110 [2024-07-15 18:52:01.639087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.110 [2024-07-15 18:52:01.639097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.111 qpair failed and we were unable to recover it. 00:26:45.111 [2024-07-15 18:52:01.639266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.111 [2024-07-15 18:52:01.639277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.111 qpair failed and we were unable to recover it. 
00:26:45.111 [2024-07-15 18:52:01.639464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.111 [2024-07-15 18:52:01.639494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.111 qpair failed and we were unable to recover it. 00:26:45.111 [2024-07-15 18:52:01.639748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.111 [2024-07-15 18:52:01.639778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.111 qpair failed and we were unable to recover it. 00:26:45.111 [2024-07-15 18:52:01.640078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.111 [2024-07-15 18:52:01.640108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.111 qpair failed and we were unable to recover it. 00:26:45.111 [2024-07-15 18:52:01.640422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.111 [2024-07-15 18:52:01.640453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.111 qpair failed and we were unable to recover it. 00:26:45.111 [2024-07-15 18:52:01.640739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.111 [2024-07-15 18:52:01.640769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.111 qpair failed and we were unable to recover it. 
00:26:45.111 [2024-07-15 18:52:01.641052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.111 [2024-07-15 18:52:01.641083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.111 qpair failed and we were unable to recover it. 00:26:45.111 [2024-07-15 18:52:01.641319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.111 [2024-07-15 18:52:01.641330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.111 qpair failed and we were unable to recover it. 00:26:45.111 [2024-07-15 18:52:01.641524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.111 [2024-07-15 18:52:01.641535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.111 qpair failed and we were unable to recover it. 00:26:45.111 [2024-07-15 18:52:01.641827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.111 [2024-07-15 18:52:01.641858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.111 qpair failed and we were unable to recover it. 00:26:45.111 [2024-07-15 18:52:01.642143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.111 [2024-07-15 18:52:01.642177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.111 qpair failed and we were unable to recover it. 
00:26:45.111 [2024-07-15 18:52:01.642337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.111 [2024-07-15 18:52:01.642370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.111 qpair failed and we were unable to recover it. 00:26:45.111 [2024-07-15 18:52:01.642585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.111 [2024-07-15 18:52:01.642595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.111 qpair failed and we were unable to recover it. 00:26:45.111 [2024-07-15 18:52:01.642846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.111 [2024-07-15 18:52:01.642879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.111 qpair failed and we were unable to recover it. 00:26:45.111 [2024-07-15 18:52:01.643116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.111 [2024-07-15 18:52:01.643146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.111 qpair failed and we were unable to recover it. 00:26:45.111 [2024-07-15 18:52:01.643456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.111 [2024-07-15 18:52:01.643488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.111 qpair failed and we were unable to recover it. 
00:26:45.111 [2024-07-15 18:52:01.643704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.111 [2024-07-15 18:52:01.643734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.111 qpair failed and we were unable to recover it.
00:26:45.115 [2024-07-15 18:52:01.674821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.115 [2024-07-15 18:52:01.674870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.115 qpair failed and we were unable to recover it.
00:26:45.115 [2024-07-15 18:52:01.675091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.115 [2024-07-15 18:52:01.675125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.115 qpair failed and we were unable to recover it. 00:26:45.115 [2024-07-15 18:52:01.675358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.115 [2024-07-15 18:52:01.675390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.115 qpair failed and we were unable to recover it. 00:26:45.115 [2024-07-15 18:52:01.675605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.115 [2024-07-15 18:52:01.675636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.115 qpair failed and we were unable to recover it. 00:26:45.115 [2024-07-15 18:52:01.675918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.115 [2024-07-15 18:52:01.675949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.115 qpair failed and we were unable to recover it. 00:26:45.115 [2024-07-15 18:52:01.676177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.115 [2024-07-15 18:52:01.676207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.115 qpair failed and we were unable to recover it. 
00:26:45.115 [2024-07-15 18:52:01.676424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.115 [2024-07-15 18:52:01.676461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.115 qpair failed and we were unable to recover it. 00:26:45.115 [2024-07-15 18:52:01.676659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.115 [2024-07-15 18:52:01.676672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.115 qpair failed and we were unable to recover it. 00:26:45.115 [2024-07-15 18:52:01.676922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.115 [2024-07-15 18:52:01.676933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.115 qpair failed and we were unable to recover it. 00:26:45.115 [2024-07-15 18:52:01.677181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.115 [2024-07-15 18:52:01.677212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.115 qpair failed and we were unable to recover it. 00:26:45.115 [2024-07-15 18:52:01.677482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.115 [2024-07-15 18:52:01.677514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.115 qpair failed and we were unable to recover it. 
00:26:45.115 [2024-07-15 18:52:01.677747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.115 [2024-07-15 18:52:01.677778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.115 qpair failed and we were unable to recover it. 00:26:45.115 [2024-07-15 18:52:01.678079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.115 [2024-07-15 18:52:01.678111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.115 qpair failed and we were unable to recover it. 00:26:45.115 [2024-07-15 18:52:01.678323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.115 [2024-07-15 18:52:01.678355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.115 qpair failed and we were unable to recover it. 00:26:45.115 [2024-07-15 18:52:01.678592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.115 [2024-07-15 18:52:01.678628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.115 qpair failed and we were unable to recover it. 00:26:45.115 [2024-07-15 18:52:01.678864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.115 [2024-07-15 18:52:01.678876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.115 qpair failed and we were unable to recover it. 
00:26:45.115 [2024-07-15 18:52:01.679158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.115 [2024-07-15 18:52:01.679170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.115 qpair failed and we were unable to recover it. 00:26:45.115 [2024-07-15 18:52:01.679293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.115 [2024-07-15 18:52:01.679304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.115 qpair failed and we were unable to recover it. 00:26:45.115 [2024-07-15 18:52:01.679491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.115 [2024-07-15 18:52:01.679521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.115 qpair failed and we were unable to recover it. 00:26:45.115 [2024-07-15 18:52:01.679755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.115 [2024-07-15 18:52:01.679784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.115 qpair failed and we were unable to recover it. 00:26:45.115 [2024-07-15 18:52:01.680026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.115 [2024-07-15 18:52:01.680056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.115 qpair failed and we were unable to recover it. 
00:26:45.115 [2024-07-15 18:52:01.680310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.115 [2024-07-15 18:52:01.680342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.116 qpair failed and we were unable to recover it. 00:26:45.116 [2024-07-15 18:52:01.680501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.116 [2024-07-15 18:52:01.680532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.116 qpair failed and we were unable to recover it. 00:26:45.116 [2024-07-15 18:52:01.680704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.116 [2024-07-15 18:52:01.680734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.116 qpair failed and we were unable to recover it. 00:26:45.116 [2024-07-15 18:52:01.680882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.116 [2024-07-15 18:52:01.680892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.116 qpair failed and we were unable to recover it. 00:26:45.116 [2024-07-15 18:52:01.681059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.116 [2024-07-15 18:52:01.681069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.116 qpair failed and we were unable to recover it. 
00:26:45.116 [2024-07-15 18:52:01.681180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.116 [2024-07-15 18:52:01.681210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.116 qpair failed and we were unable to recover it. 00:26:45.116 [2024-07-15 18:52:01.681479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.116 [2024-07-15 18:52:01.681510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.116 qpair failed and we were unable to recover it. 00:26:45.116 [2024-07-15 18:52:01.681754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.116 [2024-07-15 18:52:01.681784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.116 qpair failed and we were unable to recover it. 00:26:45.116 [2024-07-15 18:52:01.681993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.116 [2024-07-15 18:52:01.682023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.116 qpair failed and we were unable to recover it. 00:26:45.116 [2024-07-15 18:52:01.682304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.116 [2024-07-15 18:52:01.682337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.116 qpair failed and we were unable to recover it. 
00:26:45.116 [2024-07-15 18:52:01.682646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.116 [2024-07-15 18:52:01.682676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.116 qpair failed and we were unable to recover it. 00:26:45.116 [2024-07-15 18:52:01.682888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.116 [2024-07-15 18:52:01.682918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.116 qpair failed and we were unable to recover it. 00:26:45.116 [2024-07-15 18:52:01.683146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.116 [2024-07-15 18:52:01.683176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.116 qpair failed and we were unable to recover it. 00:26:45.116 [2024-07-15 18:52:01.683432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.116 [2024-07-15 18:52:01.683463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.116 qpair failed and we were unable to recover it. 00:26:45.116 [2024-07-15 18:52:01.683674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.116 [2024-07-15 18:52:01.683704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.116 qpair failed and we were unable to recover it. 
00:26:45.116 [2024-07-15 18:52:01.683958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.116 [2024-07-15 18:52:01.683969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.116 qpair failed and we were unable to recover it. 00:26:45.116 [2024-07-15 18:52:01.684083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.116 [2024-07-15 18:52:01.684093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.116 qpair failed and we were unable to recover it. 00:26:45.116 [2024-07-15 18:52:01.684216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.116 [2024-07-15 18:52:01.684232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.116 qpair failed and we were unable to recover it. 00:26:45.116 [2024-07-15 18:52:01.684334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.116 [2024-07-15 18:52:01.684344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.116 qpair failed and we were unable to recover it. 00:26:45.116 [2024-07-15 18:52:01.684461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.116 [2024-07-15 18:52:01.684474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.116 qpair failed and we were unable to recover it. 
00:26:45.117 [2024-07-15 18:52:01.684679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.117 [2024-07-15 18:52:01.684690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.117 qpair failed and we were unable to recover it. 00:26:45.117 [2024-07-15 18:52:01.684872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.117 [2024-07-15 18:52:01.684902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.117 qpair failed and we were unable to recover it. 00:26:45.117 [2024-07-15 18:52:01.685185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.117 [2024-07-15 18:52:01.685215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.117 qpair failed and we were unable to recover it. 00:26:45.117 [2024-07-15 18:52:01.685385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.117 [2024-07-15 18:52:01.685415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.117 qpair failed and we were unable to recover it. 00:26:45.117 [2024-07-15 18:52:01.685570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.117 [2024-07-15 18:52:01.685599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.117 qpair failed and we were unable to recover it. 
00:26:45.117 [2024-07-15 18:52:01.685803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.117 [2024-07-15 18:52:01.685814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.117 qpair failed and we were unable to recover it. 00:26:45.117 [2024-07-15 18:52:01.685946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.117 [2024-07-15 18:52:01.685975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.117 qpair failed and we were unable to recover it. 00:26:45.117 [2024-07-15 18:52:01.686130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.117 [2024-07-15 18:52:01.686160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.117 qpair failed and we were unable to recover it. 00:26:45.117 [2024-07-15 18:52:01.686441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.117 [2024-07-15 18:52:01.686473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.117 qpair failed and we were unable to recover it. 00:26:45.117 [2024-07-15 18:52:01.686709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.117 [2024-07-15 18:52:01.686720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.117 qpair failed and we were unable to recover it. 
00:26:45.117 [2024-07-15 18:52:01.686916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.117 [2024-07-15 18:52:01.686946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.117 qpair failed and we were unable to recover it. 00:26:45.117 [2024-07-15 18:52:01.687248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.117 [2024-07-15 18:52:01.687279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.117 qpair failed and we were unable to recover it. 00:26:45.117 [2024-07-15 18:52:01.687468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.117 [2024-07-15 18:52:01.687478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.117 qpair failed and we were unable to recover it. 00:26:45.117 [2024-07-15 18:52:01.687609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.117 [2024-07-15 18:52:01.687647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.117 qpair failed and we were unable to recover it. 00:26:45.117 [2024-07-15 18:52:01.687789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.117 [2024-07-15 18:52:01.687819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.117 qpair failed and we were unable to recover it. 
00:26:45.117 [2024-07-15 18:52:01.687991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.117 [2024-07-15 18:52:01.688021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.117 qpair failed and we were unable to recover it. 00:26:45.117 [2024-07-15 18:52:01.688171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.117 [2024-07-15 18:52:01.688201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.117 qpair failed and we were unable to recover it. 00:26:45.117 [2024-07-15 18:52:01.688379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.117 [2024-07-15 18:52:01.688409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.117 qpair failed and we were unable to recover it. 00:26:45.117 [2024-07-15 18:52:01.688619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.117 [2024-07-15 18:52:01.688629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.117 qpair failed and we were unable to recover it. 00:26:45.117 [2024-07-15 18:52:01.688816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.117 [2024-07-15 18:52:01.688846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.117 qpair failed and we were unable to recover it. 
00:26:45.117 [2024-07-15 18:52:01.689062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.117 [2024-07-15 18:52:01.689093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.117 qpair failed and we were unable to recover it. 00:26:45.117 [2024-07-15 18:52:01.689388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.117 [2024-07-15 18:52:01.689420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.117 qpair failed and we were unable to recover it. 00:26:45.117 [2024-07-15 18:52:01.689700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.117 [2024-07-15 18:52:01.689730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.117 qpair failed and we were unable to recover it. 00:26:45.117 [2024-07-15 18:52:01.689943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.117 [2024-07-15 18:52:01.689954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.117 qpair failed and we were unable to recover it. 00:26:45.117 [2024-07-15 18:52:01.690140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.117 [2024-07-15 18:52:01.690170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.117 qpair failed and we were unable to recover it. 
00:26:45.117 [2024-07-15 18:52:01.690340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.118 [2024-07-15 18:52:01.690371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.118 qpair failed and we were unable to recover it. 00:26:45.118 [2024-07-15 18:52:01.690586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.118 [2024-07-15 18:52:01.690616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.118 qpair failed and we were unable to recover it. 00:26:45.118 [2024-07-15 18:52:01.690851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.118 [2024-07-15 18:52:01.690861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.118 qpair failed and we were unable to recover it. 00:26:45.118 [2024-07-15 18:52:01.691053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.118 [2024-07-15 18:52:01.691082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.118 qpair failed and we were unable to recover it. 00:26:45.118 [2024-07-15 18:52:01.691267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.118 [2024-07-15 18:52:01.691299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.118 qpair failed and we were unable to recover it. 
00:26:45.118 [2024-07-15 18:52:01.691525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.118 [2024-07-15 18:52:01.691555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.118 qpair failed and we were unable to recover it. 00:26:45.118 [2024-07-15 18:52:01.691753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.118 [2024-07-15 18:52:01.691763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.118 qpair failed and we were unable to recover it. 00:26:45.118 [2024-07-15 18:52:01.692003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.118 [2024-07-15 18:52:01.692032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.118 qpair failed and we were unable to recover it. 00:26:45.118 [2024-07-15 18:52:01.692244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.118 [2024-07-15 18:52:01.692274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.118 qpair failed and we were unable to recover it. 00:26:45.118 [2024-07-15 18:52:01.692432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.118 [2024-07-15 18:52:01.692462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.118 qpair failed and we were unable to recover it. 
00:26:45.118 [2024-07-15 18:52:01.692618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.118 [2024-07-15 18:52:01.692647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.118 qpair failed and we were unable to recover it. 
[... the preceding three-line error (connect() failed, errno = 111; sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats with only the timestamps changing, from 18:52:01.692869 through 18:52:01.719567 ...]
00:26:45.122 [2024-07-15 18:52:01.719892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.122 [2024-07-15 18:52:01.719961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.122 qpair failed and we were unable to recover it. 
[... the same error then repeats for tqpair=0xebded0, from 18:52:01.720213 through 18:52:01.720820 ...]
00:26:45.122 [2024-07-15 18:52:01.721047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.122 [2024-07-15 18:52:01.721078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.122 qpair failed and we were unable to recover it. 00:26:45.122 [2024-07-15 18:52:01.721365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.122 [2024-07-15 18:52:01.721395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.122 qpair failed and we were unable to recover it. 00:26:45.122 [2024-07-15 18:52:01.721602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.122 [2024-07-15 18:52:01.721618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.122 qpair failed and we were unable to recover it. 00:26:45.122 [2024-07-15 18:52:01.721798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.122 [2024-07-15 18:52:01.721829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.122 qpair failed and we were unable to recover it. 00:26:45.122 [2024-07-15 18:52:01.722066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.122 [2024-07-15 18:52:01.722095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.122 qpair failed and we were unable to recover it. 
00:26:45.122 [2024-07-15 18:52:01.722317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.122 [2024-07-15 18:52:01.722348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.122 qpair failed and we were unable to recover it. 00:26:45.122 [2024-07-15 18:52:01.722620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.122 [2024-07-15 18:52:01.722651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.122 qpair failed and we were unable to recover it. 00:26:45.122 [2024-07-15 18:52:01.722946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.123 [2024-07-15 18:52:01.722977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.123 qpair failed and we were unable to recover it. 00:26:45.123 [2024-07-15 18:52:01.723237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.123 [2024-07-15 18:52:01.723269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.123 qpair failed and we were unable to recover it. 00:26:45.123 [2024-07-15 18:52:01.723479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.123 [2024-07-15 18:52:01.723510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.123 qpair failed and we were unable to recover it. 
00:26:45.123 [2024-07-15 18:52:01.723736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.123 [2024-07-15 18:52:01.723767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.123 qpair failed and we were unable to recover it. 00:26:45.123 [2024-07-15 18:52:01.723942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.123 [2024-07-15 18:52:01.723972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.123 qpair failed and we were unable to recover it. 00:26:45.123 [2024-07-15 18:52:01.724134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.123 [2024-07-15 18:52:01.724164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.123 qpair failed and we were unable to recover it. 00:26:45.123 [2024-07-15 18:52:01.724333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.123 [2024-07-15 18:52:01.724366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.123 qpair failed and we were unable to recover it. 00:26:45.123 [2024-07-15 18:52:01.724581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.123 [2024-07-15 18:52:01.724611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.123 qpair failed and we were unable to recover it. 
00:26:45.123 [2024-07-15 18:52:01.724911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.123 [2024-07-15 18:52:01.724942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.123 qpair failed and we were unable to recover it. 00:26:45.123 [2024-07-15 18:52:01.725167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.123 [2024-07-15 18:52:01.725197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.123 qpair failed and we were unable to recover it. 00:26:45.123 [2024-07-15 18:52:01.725495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.123 [2024-07-15 18:52:01.725531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.123 qpair failed and we were unable to recover it. 00:26:45.123 [2024-07-15 18:52:01.725675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.123 [2024-07-15 18:52:01.725705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.123 qpair failed and we were unable to recover it. 00:26:45.123 [2024-07-15 18:52:01.725911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.123 [2024-07-15 18:52:01.725921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.123 qpair failed and we were unable to recover it. 
00:26:45.123 [2024-07-15 18:52:01.726043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.123 [2024-07-15 18:52:01.726071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.123 qpair failed and we were unable to recover it. 00:26:45.123 [2024-07-15 18:52:01.726349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.123 [2024-07-15 18:52:01.726381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.123 qpair failed and we were unable to recover it. 00:26:45.123 [2024-07-15 18:52:01.726540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.123 [2024-07-15 18:52:01.726570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.123 qpair failed and we were unable to recover it. 00:26:45.123 [2024-07-15 18:52:01.726782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.123 [2024-07-15 18:52:01.726799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.123 qpair failed and we were unable to recover it. 00:26:45.123 [2024-07-15 18:52:01.726919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.123 [2024-07-15 18:52:01.726934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.123 qpair failed and we were unable to recover it. 
00:26:45.123 [2024-07-15 18:52:01.727209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.123 [2024-07-15 18:52:01.727250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.123 qpair failed and we were unable to recover it. 00:26:45.123 [2024-07-15 18:52:01.727426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.123 [2024-07-15 18:52:01.727457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.123 qpair failed and we were unable to recover it. 00:26:45.123 [2024-07-15 18:52:01.727738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.123 [2024-07-15 18:52:01.727769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.123 qpair failed and we were unable to recover it. 00:26:45.123 [2024-07-15 18:52:01.727929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.123 [2024-07-15 18:52:01.727960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.123 qpair failed and we were unable to recover it. 00:26:45.123 [2024-07-15 18:52:01.728191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.123 [2024-07-15 18:52:01.728221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.123 qpair failed and we were unable to recover it. 
00:26:45.123 [2024-07-15 18:52:01.728424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.123 [2024-07-15 18:52:01.728456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.123 qpair failed and we were unable to recover it. 00:26:45.123 [2024-07-15 18:52:01.728759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.123 [2024-07-15 18:52:01.728800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.123 qpair failed and we were unable to recover it. 00:26:45.123 [2024-07-15 18:52:01.728922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.123 [2024-07-15 18:52:01.728936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.123 qpair failed and we were unable to recover it. 00:26:45.123 [2024-07-15 18:52:01.729089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.123 [2024-07-15 18:52:01.729119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.123 qpair failed and we were unable to recover it. 00:26:45.123 [2024-07-15 18:52:01.729331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.123 [2024-07-15 18:52:01.729363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.123 qpair failed and we were unable to recover it. 
00:26:45.123 [2024-07-15 18:52:01.729627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.123 [2024-07-15 18:52:01.729657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.123 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.729872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.729886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.730088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.730102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.730244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.730275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.730562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.730592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 
00:26:45.124 [2024-07-15 18:52:01.730807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.730821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.730944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.730982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.731192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.731222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.731487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.731518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.731743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.731774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 
00:26:45.124 [2024-07-15 18:52:01.731976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.732006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.732219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.732260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.732471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.732502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.732729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.732760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.732973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.733004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 
00:26:45.124 [2024-07-15 18:52:01.733237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.733275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.733504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.733534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.733758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.733789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.733992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.734007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.734254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.734285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 
00:26:45.124 [2024-07-15 18:52:01.734510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.734540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.734704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.734734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.735033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.735064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.735235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.735266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.735427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.735458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 
00:26:45.124 [2024-07-15 18:52:01.735610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.735625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.735803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.735833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.736069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.736099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.736377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.736408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.736638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.736668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 
00:26:45.124 [2024-07-15 18:52:01.736905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.736935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.737162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.737191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.737437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.737468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.737681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.737711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.738024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.738054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 
00:26:45.124 [2024-07-15 18:52:01.738307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.738338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.738606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.738637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.738864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.738895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.739180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.739209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 00:26:45.124 [2024-07-15 18:52:01.739383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.739414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 
00:26:45.124 [2024-07-15 18:52:01.739540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.124 [2024-07-15 18:52:01.739571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.124 qpair failed and we were unable to recover it. 
[... the same three-part error (posix.c:1038:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error; "qpair failed and we were unable to recover it") repeats continuously from 18:52:01.739 through 18:52:01.767, predominantly for tqpair=0xebded0, with shorter runs for tqpair=0x7ff7c8000b90 and tqpair=0x7ff7b8000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:26:45.128 [2024-07-15 18:52:01.767284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.128 [2024-07-15 18:52:01.767315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.128 qpair failed and we were unable to recover it. 00:26:45.128 [2024-07-15 18:52:01.767585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.128 [2024-07-15 18:52:01.767615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.128 qpair failed and we were unable to recover it. 00:26:45.128 [2024-07-15 18:52:01.767833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.128 [2024-07-15 18:52:01.767863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.128 qpair failed and we were unable to recover it. 00:26:45.128 [2024-07-15 18:52:01.768081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.128 [2024-07-15 18:52:01.768095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.128 qpair failed and we were unable to recover it. 00:26:45.128 [2024-07-15 18:52:01.768311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.128 [2024-07-15 18:52:01.768325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.128 qpair failed and we were unable to recover it. 
00:26:45.128 [2024-07-15 18:52:01.768427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.128 [2024-07-15 18:52:01.768440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.128 qpair failed and we were unable to recover it. 00:26:45.128 [2024-07-15 18:52:01.768547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.128 [2024-07-15 18:52:01.768561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.128 qpair failed and we were unable to recover it. 00:26:45.128 [2024-07-15 18:52:01.768681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.128 [2024-07-15 18:52:01.768695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.128 qpair failed and we were unable to recover it. 00:26:45.128 [2024-07-15 18:52:01.768896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.128 [2024-07-15 18:52:01.768925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.128 qpair failed and we were unable to recover it. 00:26:45.128 [2024-07-15 18:52:01.769076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.128 [2024-07-15 18:52:01.769107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.128 qpair failed and we were unable to recover it. 
00:26:45.128 [2024-07-15 18:52:01.769393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.128 [2024-07-15 18:52:01.769425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.128 qpair failed and we were unable to recover it. 00:26:45.128 [2024-07-15 18:52:01.769574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.128 [2024-07-15 18:52:01.769606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.128 qpair failed and we were unable to recover it. 00:26:45.128 [2024-07-15 18:52:01.769825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.128 [2024-07-15 18:52:01.769839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.128 qpair failed and we were unable to recover it. 00:26:45.128 [2024-07-15 18:52:01.769960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.128 [2024-07-15 18:52:01.769989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.128 qpair failed and we were unable to recover it. 00:26:45.128 [2024-07-15 18:52:01.770143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.128 [2024-07-15 18:52:01.770173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.128 qpair failed and we were unable to recover it. 
00:26:45.128 [2024-07-15 18:52:01.770310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.128 [2024-07-15 18:52:01.770342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.128 qpair failed and we were unable to recover it. 00:26:45.128 [2024-07-15 18:52:01.770504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.128 [2024-07-15 18:52:01.770535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.128 qpair failed and we were unable to recover it. 00:26:45.128 [2024-07-15 18:52:01.770691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.128 [2024-07-15 18:52:01.770722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.128 qpair failed and we were unable to recover it. 00:26:45.128 [2024-07-15 18:52:01.770912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.128 [2024-07-15 18:52:01.770927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.128 qpair failed and we were unable to recover it. 00:26:45.128 [2024-07-15 18:52:01.771111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.128 [2024-07-15 18:52:01.771141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.128 qpair failed and we were unable to recover it. 
00:26:45.128 [2024-07-15 18:52:01.771357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.128 [2024-07-15 18:52:01.771388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.128 qpair failed and we were unable to recover it. 00:26:45.128 [2024-07-15 18:52:01.771597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.128 [2024-07-15 18:52:01.771628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.128 qpair failed and we were unable to recover it. 00:26:45.128 [2024-07-15 18:52:01.771785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.128 [2024-07-15 18:52:01.771814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.128 qpair failed and we were unable to recover it. 00:26:45.128 [2024-07-15 18:52:01.772036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.128 [2024-07-15 18:52:01.772078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.128 qpair failed and we were unable to recover it. 00:26:45.128 [2024-07-15 18:52:01.772248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.128 [2024-07-15 18:52:01.772279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.128 qpair failed and we were unable to recover it. 
00:26:45.128 [2024-07-15 18:52:01.772502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.128 [2024-07-15 18:52:01.772532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.128 qpair failed and we were unable to recover it. 00:26:45.128 [2024-07-15 18:52:01.772755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.128 [2024-07-15 18:52:01.772770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.129 qpair failed and we were unable to recover it. 00:26:45.129 [2024-07-15 18:52:01.772887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.129 [2024-07-15 18:52:01.772918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.129 qpair failed and we were unable to recover it. 00:26:45.129 [2024-07-15 18:52:01.773125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.129 [2024-07-15 18:52:01.773154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.129 qpair failed and we were unable to recover it. 00:26:45.129 [2024-07-15 18:52:01.773311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.129 [2024-07-15 18:52:01.773343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.129 qpair failed and we were unable to recover it. 
00:26:45.129 [2024-07-15 18:52:01.773507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.129 [2024-07-15 18:52:01.773538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.129 qpair failed and we were unable to recover it. 00:26:45.129 [2024-07-15 18:52:01.773815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.129 [2024-07-15 18:52:01.773845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.129 qpair failed and we were unable to recover it. 00:26:45.129 [2024-07-15 18:52:01.774008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.129 [2024-07-15 18:52:01.774038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.129 qpair failed and we were unable to recover it. 00:26:45.129 [2024-07-15 18:52:01.774189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.129 [2024-07-15 18:52:01.774221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.129 qpair failed and we were unable to recover it. 00:26:45.129 [2024-07-15 18:52:01.774530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.129 [2024-07-15 18:52:01.774560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.129 qpair failed and we were unable to recover it. 
00:26:45.129 [2024-07-15 18:52:01.774806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.129 [2024-07-15 18:52:01.774837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.129 qpair failed and we were unable to recover it. 00:26:45.129 [2024-07-15 18:52:01.775129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.129 [2024-07-15 18:52:01.775144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.129 qpair failed and we were unable to recover it. 00:26:45.129 [2024-07-15 18:52:01.775329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.129 [2024-07-15 18:52:01.775344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.129 qpair failed and we were unable to recover it. 00:26:45.129 [2024-07-15 18:52:01.775483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.129 [2024-07-15 18:52:01.775497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.129 qpair failed and we were unable to recover it. 00:26:45.129 [2024-07-15 18:52:01.775669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.129 [2024-07-15 18:52:01.775683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.129 qpair failed and we were unable to recover it. 
00:26:45.129 [2024-07-15 18:52:01.775799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.129 [2024-07-15 18:52:01.775829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.129 qpair failed and we were unable to recover it. 00:26:45.129 [2024-07-15 18:52:01.776042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.129 [2024-07-15 18:52:01.776072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.129 qpair failed and we were unable to recover it. 00:26:45.129 [2024-07-15 18:52:01.776355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.129 [2024-07-15 18:52:01.776386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.129 qpair failed and we were unable to recover it. 00:26:45.129 [2024-07-15 18:52:01.776609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.129 [2024-07-15 18:52:01.776640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.129 qpair failed and we were unable to recover it. 00:26:45.129 [2024-07-15 18:52:01.776867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.129 [2024-07-15 18:52:01.776897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.129 qpair failed and we were unable to recover it. 
00:26:45.129 [2024-07-15 18:52:01.777064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.129 [2024-07-15 18:52:01.777094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.129 qpair failed and we were unable to recover it. 00:26:45.129 [2024-07-15 18:52:01.777246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.129 [2024-07-15 18:52:01.777278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.129 qpair failed and we were unable to recover it. 00:26:45.129 [2024-07-15 18:52:01.777512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.129 [2024-07-15 18:52:01.777543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.129 qpair failed and we were unable to recover it. 00:26:45.129 [2024-07-15 18:52:01.777788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.129 [2024-07-15 18:52:01.777818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.129 qpair failed and we were unable to recover it. 00:26:45.129 [2024-07-15 18:52:01.778024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.129 [2024-07-15 18:52:01.778054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.129 qpair failed and we were unable to recover it. 
00:26:45.129 [2024-07-15 18:52:01.778282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.129 [2024-07-15 18:52:01.778314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.129 qpair failed and we were unable to recover it. 00:26:45.129 [2024-07-15 18:52:01.778541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.129 [2024-07-15 18:52:01.778572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.129 qpair failed and we were unable to recover it. 00:26:45.129 [2024-07-15 18:52:01.778738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.130 [2024-07-15 18:52:01.778753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.130 qpair failed and we were unable to recover it. 00:26:45.130 [2024-07-15 18:52:01.778888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.130 [2024-07-15 18:52:01.778918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.130 qpair failed and we were unable to recover it. 00:26:45.130 [2024-07-15 18:52:01.779070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.130 [2024-07-15 18:52:01.779101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.130 qpair failed and we were unable to recover it. 
00:26:45.130 [2024-07-15 18:52:01.779255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.130 [2024-07-15 18:52:01.779286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.130 qpair failed and we were unable to recover it. 00:26:45.130 [2024-07-15 18:52:01.779501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.130 [2024-07-15 18:52:01.779534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.130 qpair failed and we were unable to recover it. 00:26:45.130 [2024-07-15 18:52:01.779775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.130 [2024-07-15 18:52:01.779806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.130 qpair failed and we were unable to recover it. 00:26:45.130 [2024-07-15 18:52:01.779961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.130 [2024-07-15 18:52:01.779991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.130 qpair failed and we were unable to recover it. 00:26:45.130 [2024-07-15 18:52:01.780151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.130 [2024-07-15 18:52:01.780183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.130 qpair failed and we were unable to recover it. 
00:26:45.130 [2024-07-15 18:52:01.780401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.130 [2024-07-15 18:52:01.780433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.130 qpair failed and we were unable to recover it. 00:26:45.130 [2024-07-15 18:52:01.780645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.130 [2024-07-15 18:52:01.780677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.130 qpair failed and we were unable to recover it. 00:26:45.130 [2024-07-15 18:52:01.780823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.130 [2024-07-15 18:52:01.780852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.130 qpair failed and we were unable to recover it. 00:26:45.130 [2024-07-15 18:52:01.781002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.130 [2024-07-15 18:52:01.781032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.130 qpair failed and we were unable to recover it. 00:26:45.130 [2024-07-15 18:52:01.781269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.130 [2024-07-15 18:52:01.781307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.130 qpair failed and we were unable to recover it. 
00:26:45.130 [2024-07-15 18:52:01.781515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.130 [2024-07-15 18:52:01.781545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.130 qpair failed and we were unable to recover it. 00:26:45.130 [2024-07-15 18:52:01.781762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.130 [2024-07-15 18:52:01.781792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.130 qpair failed and we were unable to recover it. 00:26:45.130 [2024-07-15 18:52:01.782012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.130 [2024-07-15 18:52:01.782026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.130 qpair failed and we were unable to recover it. 00:26:45.130 [2024-07-15 18:52:01.782138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.130 [2024-07-15 18:52:01.782152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.130 qpair failed and we were unable to recover it. 00:26:45.130 [2024-07-15 18:52:01.782269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.130 [2024-07-15 18:52:01.782284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.130 qpair failed and we were unable to recover it. 
00:26:45.130 [2024-07-15 18:52:01.782460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.130 [2024-07-15 18:52:01.782473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.130 qpair failed and we were unable to recover it. 00:26:45.130 [2024-07-15 18:52:01.782641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.130 [2024-07-15 18:52:01.782655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.130 qpair failed and we were unable to recover it. 00:26:45.130 [2024-07-15 18:52:01.782852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.130 [2024-07-15 18:52:01.782866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.130 qpair failed and we were unable to recover it. 00:26:45.130 [2024-07-15 18:52:01.783033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.130 [2024-07-15 18:52:01.783048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.130 qpair failed and we were unable to recover it. 00:26:45.130 [2024-07-15 18:52:01.783174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.130 [2024-07-15 18:52:01.783188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.130 qpair failed and we were unable to recover it. 
00:26:45.130 [2024-07-15 18:52:01.783320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.130 [2024-07-15 18:52:01.783335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.130 qpair failed and we were unable to recover it.
00:26:45.130 [2024-07-15 18:52:01.783519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.130 [2024-07-15 18:52:01.783533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.130 qpair failed and we were unable to recover it.
00:26:45.130 [2024-07-15 18:52:01.783770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.130 [2024-07-15 18:52:01.783784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.130 qpair failed and we were unable to recover it.
00:26:45.130 [2024-07-15 18:52:01.783907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.130 [2024-07-15 18:52:01.783921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.130 qpair failed and we were unable to recover it.
00:26:45.130 [2024-07-15 18:52:01.784110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.131 [2024-07-15 18:52:01.784124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.131 qpair failed and we were unable to recover it.
00:26:45.131 [2024-07-15 18:52:01.784244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.131 [2024-07-15 18:52:01.784259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.131 qpair failed and we were unable to recover it.
00:26:45.131 [2024-07-15 18:52:01.784448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.131 [2024-07-15 18:52:01.784463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.131 qpair failed and we were unable to recover it.
00:26:45.131 [2024-07-15 18:52:01.784580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.131 [2024-07-15 18:52:01.784594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.131 qpair failed and we were unable to recover it.
00:26:45.131 [2024-07-15 18:52:01.784768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.131 [2024-07-15 18:52:01.784782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.131 qpair failed and we were unable to recover it.
00:26:45.131 [2024-07-15 18:52:01.784955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.131 [2024-07-15 18:52:01.784969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.131 qpair failed and we were unable to recover it.
00:26:45.131 [2024-07-15 18:52:01.785093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.131 [2024-07-15 18:52:01.785108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.131 qpair failed and we were unable to recover it.
00:26:45.131 [2024-07-15 18:52:01.785296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.131 [2024-07-15 18:52:01.785312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.131 qpair failed and we were unable to recover it.
00:26:45.131 [2024-07-15 18:52:01.785413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.131 [2024-07-15 18:52:01.785428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.131 qpair failed and we were unable to recover it.
00:26:45.131 [2024-07-15 18:52:01.785520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.131 [2024-07-15 18:52:01.785534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.131 qpair failed and we were unable to recover it.
00:26:45.131 [2024-07-15 18:52:01.785714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.131 [2024-07-15 18:52:01.785729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.131 qpair failed and we were unable to recover it.
00:26:45.131 [2024-07-15 18:52:01.785913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.131 [2024-07-15 18:52:01.785927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.131 qpair failed and we were unable to recover it.
00:26:45.131 [2024-07-15 18:52:01.786096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.131 [2024-07-15 18:52:01.786110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.131 qpair failed and we were unable to recover it.
00:26:45.131 [2024-07-15 18:52:01.786305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.131 [2024-07-15 18:52:01.786319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.131 qpair failed and we were unable to recover it.
00:26:45.131 [2024-07-15 18:52:01.786517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.131 [2024-07-15 18:52:01.786531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.131 qpair failed and we were unable to recover it.
00:26:45.131 [2024-07-15 18:52:01.786654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.131 [2024-07-15 18:52:01.786668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.131 qpair failed and we were unable to recover it.
00:26:45.131 [2024-07-15 18:52:01.786761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.131 [2024-07-15 18:52:01.786775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.131 qpair failed and we were unable to recover it.
00:26:45.131 [2024-07-15 18:52:01.786974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.131 [2024-07-15 18:52:01.786987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.131 qpair failed and we were unable to recover it.
00:26:45.131 [2024-07-15 18:52:01.787179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.131 [2024-07-15 18:52:01.787193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.131 qpair failed and we were unable to recover it.
00:26:45.131 [2024-07-15 18:52:01.787310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.131 [2024-07-15 18:52:01.787328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.131 qpair failed and we were unable to recover it.
00:26:45.131 [2024-07-15 18:52:01.787459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.131 [2024-07-15 18:52:01.787473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.131 qpair failed and we were unable to recover it.
00:26:45.131 [2024-07-15 18:52:01.787646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.131 [2024-07-15 18:52:01.787660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.131 qpair failed and we were unable to recover it.
00:26:45.131 [2024-07-15 18:52:01.787852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.131 [2024-07-15 18:52:01.787866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.131 qpair failed and we were unable to recover it.
00:26:45.131 [2024-07-15 18:52:01.788049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.131 [2024-07-15 18:52:01.788063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.131 qpair failed and we were unable to recover it.
00:26:45.131 [2024-07-15 18:52:01.788188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.131 [2024-07-15 18:52:01.788202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.131 qpair failed and we were unable to recover it.
00:26:45.131 [2024-07-15 18:52:01.788313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.131 [2024-07-15 18:52:01.788327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.131 qpair failed and we were unable to recover it.
00:26:45.131 [2024-07-15 18:52:01.788535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.131 [2024-07-15 18:52:01.788549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.131 qpair failed and we were unable to recover it.
00:26:45.132 [2024-07-15 18:52:01.788723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.132 [2024-07-15 18:52:01.788737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.132 qpair failed and we were unable to recover it.
00:26:45.132 [2024-07-15 18:52:01.788847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.132 [2024-07-15 18:52:01.788861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.132 qpair failed and we were unable to recover it.
00:26:45.132 [2024-07-15 18:52:01.789033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.132 [2024-07-15 18:52:01.789047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.132 qpair failed and we were unable to recover it.
00:26:45.437 [2024-07-15 18:52:01.789163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.437 [2024-07-15 18:52:01.789177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.437 qpair failed and we were unable to recover it.
00:26:45.437 [2024-07-15 18:52:01.789300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.437 [2024-07-15 18:52:01.789315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.437 qpair failed and we were unable to recover it.
00:26:45.437 [2024-07-15 18:52:01.789556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.437 [2024-07-15 18:52:01.789572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.437 qpair failed and we were unable to recover it.
00:26:45.437 [2024-07-15 18:52:01.789749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.437 [2024-07-15 18:52:01.789763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.437 qpair failed and we were unable to recover it.
00:26:45.437 [2024-07-15 18:52:01.789866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.437 [2024-07-15 18:52:01.789880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.437 qpair failed and we were unable to recover it.
00:26:45.437 [2024-07-15 18:52:01.790061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.437 [2024-07-15 18:52:01.790076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.437 qpair failed and we were unable to recover it.
00:26:45.437 [2024-07-15 18:52:01.790190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.437 [2024-07-15 18:52:01.790205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.437 qpair failed and we were unable to recover it.
00:26:45.437 [2024-07-15 18:52:01.790385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.437 [2024-07-15 18:52:01.790400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.437 qpair failed and we were unable to recover it.
00:26:45.437 [2024-07-15 18:52:01.790494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.437 [2024-07-15 18:52:01.790508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.437 qpair failed and we were unable to recover it.
00:26:45.437 [2024-07-15 18:52:01.791325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.437 [2024-07-15 18:52:01.791352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.437 qpair failed and we were unable to recover it.
00:26:45.437 [2024-07-15 18:52:01.791499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.437 [2024-07-15 18:52:01.791515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.437 qpair failed and we were unable to recover it.
00:26:45.437 [2024-07-15 18:52:01.791706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.437 [2024-07-15 18:52:01.791720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.437 qpair failed and we were unable to recover it.
00:26:45.437 [2024-07-15 18:52:01.791925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.791939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.792104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.792118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.792360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.792391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.792601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.792631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.792785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.792815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.793089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.793119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.793339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.793370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.793534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.793564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.793740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.793770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.793998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.794028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.794244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.794276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.794431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.794467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.794741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.794772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.795071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.795101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.795293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.795325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.795495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.795525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.795735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.795765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.796010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.796040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.796261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.796292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.796507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.796537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.796743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.796772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.796983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.797013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.797293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.797308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.797479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.797493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.797665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.797679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.797842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.797856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.798029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.798059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.798291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.798322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.798493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.798523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.798735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.798765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.798913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.798942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.799149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.799179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.799335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.799366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.799586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.799616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.799840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.799870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.800028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.800058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.800274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.800304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.800451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.800481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.800713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.800743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.800968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.800999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.438 [2024-07-15 18:52:01.801159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.438 [2024-07-15 18:52:01.801190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.438 qpair failed and we were unable to recover it.
00:26:45.439 [2024-07-15 18:52:01.801363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.439 [2024-07-15 18:52:01.801395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.439 qpair failed and we were unable to recover it.
00:26:45.439 [2024-07-15 18:52:01.801547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.439 [2024-07-15 18:52:01.801576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.439 qpair failed and we were unable to recover it.
00:26:45.439 [2024-07-15 18:52:01.801716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.439 [2024-07-15 18:52:01.801746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.439 qpair failed and we were unable to recover it.
00:26:45.439 [2024-07-15 18:52:01.801884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.439 [2024-07-15 18:52:01.801914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.439 qpair failed and we were unable to recover it.
00:26:45.439 [2024-07-15 18:52:01.802122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.439 [2024-07-15 18:52:01.802152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.439 qpair failed and we were unable to recover it.
00:26:45.439 [2024-07-15 18:52:01.802376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.439 [2024-07-15 18:52:01.802407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.439 qpair failed and we were unable to recover it.
00:26:45.439 [2024-07-15 18:52:01.802711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.439 [2024-07-15 18:52:01.802742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.439 qpair failed and we were unable to recover it.
00:26:45.439 [2024-07-15 18:52:01.802878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.439 [2024-07-15 18:52:01.802892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.439 qpair failed and we were unable to recover it.
00:26:45.439 [2024-07-15 18:52:01.803070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.439 [2024-07-15 18:52:01.803100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.439 qpair failed and we were unable to recover it.
00:26:45.439 [2024-07-15 18:52:01.803337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.439 [2024-07-15 18:52:01.803368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.439 qpair failed and we were unable to recover it.
00:26:45.439 [2024-07-15 18:52:01.803606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.439 [2024-07-15 18:52:01.803635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.439 qpair failed and we were unable to recover it.
00:26:45.439 [2024-07-15 18:52:01.803781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.439 [2024-07-15 18:52:01.803795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.439 qpair failed and we were unable to recover it.
00:26:45.439 [2024-07-15 18:52:01.803904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.439 [2024-07-15 18:52:01.803918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.439 qpair failed and we were unable to recover it.
00:26:45.439 [2024-07-15 18:52:01.804169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.439 [2024-07-15 18:52:01.804207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.439 qpair failed and we were unable to recover it.
00:26:45.439 [2024-07-15 18:52:01.804379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.439 [2024-07-15 18:52:01.804410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.439 qpair failed and we were unable to recover it.
00:26:45.439 [2024-07-15 18:52:01.804634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.439 [2024-07-15 18:52:01.804664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.439 qpair failed and we were unable to recover it.
00:26:45.439 [2024-07-15 18:52:01.804871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.439 [2024-07-15 18:52:01.804901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.439 qpair failed and we were unable to recover it.
00:26:45.439 [2024-07-15 18:52:01.805220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.439 [2024-07-15 18:52:01.805257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.439 qpair failed and we were unable to recover it.
00:26:45.439 [2024-07-15 18:52:01.805412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.439 [2024-07-15 18:52:01.805442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.439 qpair failed and we were unable to recover it.
00:26:45.439 [2024-07-15 18:52:01.805738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.439 [2024-07-15 18:52:01.805768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.439 qpair failed and we were unable to recover it.
00:26:45.439 [2024-07-15 18:52:01.805924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.439 [2024-07-15 18:52:01.805953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.439 qpair failed and we were unable to recover it.
00:26:45.439 [2024-07-15 18:52:01.806093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.439 [2024-07-15 18:52:01.806123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.439 qpair failed and we were unable to recover it.
00:26:45.439 [2024-07-15 18:52:01.806376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.439 [2024-07-15 18:52:01.806408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.439 qpair failed and we were unable to recover it.
00:26:45.439 [2024-07-15 18:52:01.806685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.439 [2024-07-15 18:52:01.806714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.439 qpair failed and we were unable to recover it.
00:26:45.439 [2024-07-15 18:52:01.806881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.439 [2024-07-15 18:52:01.806911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.439 qpair failed and we were unable to recover it.
00:26:45.439 [2024-07-15 18:52:01.807062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.439 [2024-07-15 18:52:01.807075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.439 qpair failed and we were unable to recover it.
00:26:45.439 [2024-07-15 18:52:01.807182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.439 [2024-07-15 18:52:01.807196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.439 qpair failed and we were unable to recover it.
00:26:45.439 [2024-07-15 18:52:01.807369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.439 [2024-07-15 18:52:01.807383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.439 qpair failed and we were unable to recover it.
00:26:45.439 [2024-07-15 18:52:01.807516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.439 [2024-07-15 18:52:01.807546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.439 qpair failed and we were unable to recover it. 00:26:45.439 [2024-07-15 18:52:01.807703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.439 [2024-07-15 18:52:01.807732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.439 qpair failed and we were unable to recover it. 00:26:45.439 [2024-07-15 18:52:01.807941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.439 [2024-07-15 18:52:01.807971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.439 qpair failed and we were unable to recover it. 00:26:45.439 [2024-07-15 18:52:01.808200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.439 [2024-07-15 18:52:01.808213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.439 qpair failed and we were unable to recover it. 00:26:45.439 [2024-07-15 18:52:01.808334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.439 [2024-07-15 18:52:01.808349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.439 qpair failed and we were unable to recover it. 
00:26:45.439 [2024-07-15 18:52:01.808444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.439 [2024-07-15 18:52:01.808458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.439 qpair failed and we were unable to recover it. 00:26:45.439 [2024-07-15 18:52:01.808582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.439 [2024-07-15 18:52:01.808596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.439 qpair failed and we were unable to recover it. 00:26:45.439 [2024-07-15 18:52:01.808867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.439 [2024-07-15 18:52:01.808896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.439 qpair failed and we were unable to recover it. 00:26:45.439 [2024-07-15 18:52:01.809024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.439 [2024-07-15 18:52:01.809054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.439 qpair failed and we were unable to recover it. 00:26:45.439 [2024-07-15 18:52:01.809215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.439 [2024-07-15 18:52:01.809255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.439 qpair failed and we were unable to recover it. 
00:26:45.439 [2024-07-15 18:52:01.809450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.439 [2024-07-15 18:52:01.809485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.439 qpair failed and we were unable to recover it. 00:26:45.439 [2024-07-15 18:52:01.809701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.439 [2024-07-15 18:52:01.809730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.439 qpair failed and we were unable to recover it. 00:26:45.439 [2024-07-15 18:52:01.809944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.439 [2024-07-15 18:52:01.809974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.810123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.810137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.810372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.810387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 
00:26:45.440 [2024-07-15 18:52:01.810496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.810510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.810657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.810671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.810855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.810869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.810992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.811005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.811266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.811281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 
00:26:45.440 [2024-07-15 18:52:01.811396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.811410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.811565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.811595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.811741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.811771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.811915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.811945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.812099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.812113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 
00:26:45.440 [2024-07-15 18:52:01.812319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.812334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.812444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.812458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.812706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.812736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.812944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.812974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.813173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.813186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 
00:26:45.440 [2024-07-15 18:52:01.813397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.813411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.813537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.813567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.813719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.813749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.813888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.813918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.814055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.814097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 
00:26:45.440 [2024-07-15 18:52:01.814211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.814230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.814335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.814349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.814484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.814498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.814680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.814711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.814926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.814956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 
00:26:45.440 [2024-07-15 18:52:01.815171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.815201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.815434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.815465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.815705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.815736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.815883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.815913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.816137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.816167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 
00:26:45.440 [2024-07-15 18:52:01.816323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.816354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.816504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.816535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.816807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.816836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.816986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.817016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.817156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.817186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 
00:26:45.440 [2024-07-15 18:52:01.817359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.817390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.817518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.817553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.817692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-07-15 18:52:01.817722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-07-15 18:52:01.817881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-07-15 18:52:01.817922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-07-15 18:52:01.818154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-07-15 18:52:01.818167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 
00:26:45.441 [2024-07-15 18:52:01.818453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-07-15 18:52:01.818484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-07-15 18:52:01.818709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-07-15 18:52:01.818739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-07-15 18:52:01.818903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-07-15 18:52:01.818933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-07-15 18:52:01.819146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-07-15 18:52:01.819160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-07-15 18:52:01.819279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-07-15 18:52:01.819317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 
00:26:45.441 [2024-07-15 18:52:01.819482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-07-15 18:52:01.819512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-07-15 18:52:01.819653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-07-15 18:52:01.819683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-07-15 18:52:01.819834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-07-15 18:52:01.819863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-07-15 18:52:01.820087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-07-15 18:52:01.820116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-07-15 18:52:01.820264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-07-15 18:52:01.820278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 
00:26:45.441 [2024-07-15 18:52:01.820464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-07-15 18:52:01.820497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-07-15 18:52:01.820711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-07-15 18:52:01.820741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-07-15 18:52:01.820886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-07-15 18:52:01.820900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-07-15 18:52:01.821073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-07-15 18:52:01.821087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-07-15 18:52:01.821300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-07-15 18:52:01.821331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 
00:26:45.441 [2024-07-15 18:52:01.821544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-07-15 18:52:01.821574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-07-15 18:52:01.821717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-07-15 18:52:01.821748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-07-15 18:52:01.821961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-07-15 18:52:01.821991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-07-15 18:52:01.822260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-07-15 18:52:01.822291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-07-15 18:52:01.822509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-07-15 18:52:01.822539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 
00:26:45.441 [2024-07-15 18:52:01.822708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:26:45.441 [2024-07-15 18:52:01.822738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 
00:26:45.441 qpair failed and we were unable to recover it. 
00:26:45.445 [2024-07-15 18:52:01.846795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.445 [2024-07-15 18:52:01.846825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.445 qpair failed and we were unable to recover it. 00:26:45.445 [2024-07-15 18:52:01.846970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.445 [2024-07-15 18:52:01.847000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.445 qpair failed and we were unable to recover it. 00:26:45.445 [2024-07-15 18:52:01.847275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.445 [2024-07-15 18:52:01.847307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.445 qpair failed and we were unable to recover it. 00:26:45.445 [2024-07-15 18:52:01.847442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.445 [2024-07-15 18:52:01.847472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.445 qpair failed and we were unable to recover it. 00:26:45.445 [2024-07-15 18:52:01.847610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.445 [2024-07-15 18:52:01.847639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.445 qpair failed and we were unable to recover it. 
00:26:45.445 [2024-07-15 18:52:01.847845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.445 [2024-07-15 18:52:01.847875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.445 qpair failed and we were unable to recover it. 00:26:45.445 [2024-07-15 18:52:01.848028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.445 [2024-07-15 18:52:01.848058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.445 qpair failed and we were unable to recover it. 00:26:45.445 [2024-07-15 18:52:01.848266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.445 [2024-07-15 18:52:01.848297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.445 qpair failed and we were unable to recover it. 00:26:45.445 [2024-07-15 18:52:01.848524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.445 [2024-07-15 18:52:01.848555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.445 qpair failed and we were unable to recover it. 00:26:45.445 [2024-07-15 18:52:01.848781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.445 [2024-07-15 18:52:01.848810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.445 qpair failed and we were unable to recover it. 
00:26:45.445 [2024-07-15 18:52:01.849087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.445 [2024-07-15 18:52:01.849118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.445 qpair failed and we were unable to recover it. 00:26:45.445 [2024-07-15 18:52:01.849286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.445 [2024-07-15 18:52:01.849300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.445 qpair failed and we were unable to recover it. 00:26:45.445 [2024-07-15 18:52:01.849436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.445 [2024-07-15 18:52:01.849466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.445 qpair failed and we were unable to recover it. 00:26:45.445 [2024-07-15 18:52:01.849683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.445 [2024-07-15 18:52:01.849714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.445 qpair failed and we were unable to recover it. 00:26:45.445 [2024-07-15 18:52:01.849847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.445 [2024-07-15 18:52:01.849877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.445 qpair failed and we were unable to recover it. 
00:26:45.445 [2024-07-15 18:52:01.850090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.445 [2024-07-15 18:52:01.850120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.445 qpair failed and we were unable to recover it. 00:26:45.445 [2024-07-15 18:52:01.850332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.445 [2024-07-15 18:52:01.850362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.445 qpair failed and we were unable to recover it. 00:26:45.445 [2024-07-15 18:52:01.850568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.445 [2024-07-15 18:52:01.850598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.445 qpair failed and we were unable to recover it. 00:26:45.445 [2024-07-15 18:52:01.850856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.445 [2024-07-15 18:52:01.850886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.445 qpair failed and we were unable to recover it. 00:26:45.445 [2024-07-15 18:52:01.851030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.445 [2024-07-15 18:52:01.851044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.445 qpair failed and we were unable to recover it. 
00:26:45.445 [2024-07-15 18:52:01.851262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.851293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.851424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.851453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.851617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.851647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.851803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.851834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.852040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.852075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 
00:26:45.446 [2024-07-15 18:52:01.852231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.852262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.852481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.852511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.852626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.852656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.852871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.852886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.853054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.853068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 
00:26:45.446 [2024-07-15 18:52:01.853248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.853280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.853483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.853512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.853664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.853694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.853885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.853915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.854054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.854084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 
00:26:45.446 [2024-07-15 18:52:01.854310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.854341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.854496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.854526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.854809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.854840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.855115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.855184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.855375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.855410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 
00:26:45.446 [2024-07-15 18:52:01.855563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.855593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.855759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.855791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.856063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.856093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.856331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.856341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.856463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.856493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 
00:26:45.446 [2024-07-15 18:52:01.856700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.856729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.856894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.856924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.857083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.857112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.857333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.857363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.857611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.857642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 
00:26:45.446 [2024-07-15 18:52:01.857788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.857818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.857973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.858011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.858160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.858195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.858293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.858304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.858485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.858515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 
00:26:45.446 [2024-07-15 18:52:01.858726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.858756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.858929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.858960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.859129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.859139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.859299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.859309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.446 [2024-07-15 18:52:01.859423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.859453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 
00:26:45.446 [2024-07-15 18:52:01.859678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.446 [2024-07-15 18:52:01.859707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.446 qpair failed and we were unable to recover it. 00:26:45.447 [2024-07-15 18:52:01.859845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.859876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 00:26:45.447 [2024-07-15 18:52:01.860017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.860047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 00:26:45.447 [2024-07-15 18:52:01.860212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.860250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 00:26:45.447 [2024-07-15 18:52:01.860454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.860484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 
00:26:45.447 [2024-07-15 18:52:01.860826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.860856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 00:26:45.447 [2024-07-15 18:52:01.861101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.861131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 00:26:45.447 [2024-07-15 18:52:01.861288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.861319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 00:26:45.447 [2024-07-15 18:52:01.861489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.861520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 00:26:45.447 [2024-07-15 18:52:01.861737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.861767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 
00:26:45.447 [2024-07-15 18:52:01.861910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.861940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 00:26:45.447 [2024-07-15 18:52:01.862212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.862256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 00:26:45.447 [2024-07-15 18:52:01.862478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.862508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 00:26:45.447 [2024-07-15 18:52:01.862665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.862694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 00:26:45.447 [2024-07-15 18:52:01.862837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.862872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 
00:26:45.447 [2024-07-15 18:52:01.863038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.863047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 00:26:45.447 [2024-07-15 18:52:01.863161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.863191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 00:26:45.447 [2024-07-15 18:52:01.863417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.863448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 00:26:45.447 [2024-07-15 18:52:01.863678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.863708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 00:26:45.447 [2024-07-15 18:52:01.863854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.863884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 
00:26:45.447 [2024-07-15 18:52:01.864046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.864055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 00:26:45.447 [2024-07-15 18:52:01.864172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.864182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 00:26:45.447 [2024-07-15 18:52:01.864280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.864291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 00:26:45.447 [2024-07-15 18:52:01.864399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.864410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 00:26:45.447 [2024-07-15 18:52:01.864526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.864536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 
00:26:45.447 [2024-07-15 18:52:01.864654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.864664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 00:26:45.447 [2024-07-15 18:52:01.864758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.864768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 00:26:45.447 [2024-07-15 18:52:01.864958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.864968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 00:26:45.447 [2024-07-15 18:52:01.865135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.865165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 00:26:45.447 [2024-07-15 18:52:01.865325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.865356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 
00:26:45.447 [2024-07-15 18:52:01.865574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.447 [2024-07-15 18:52:01.865604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.447 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.865755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.865789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.865942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.865951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.866129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.866159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.866379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.866409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 
00:26:45.448 [2024-07-15 18:52:01.866681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.866711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.866876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.866906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.867112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.867122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.867238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.867262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.867502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.867532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 
00:26:45.448 [2024-07-15 18:52:01.867758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.867788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.867937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.867967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.868171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.868201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.868353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.868363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.868587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.868616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 
00:26:45.448 [2024-07-15 18:52:01.868787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.868816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.868972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.869002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.869160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.869189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.869407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.869437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.869637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.869667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 
00:26:45.448 [2024-07-15 18:52:01.869814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.869843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.870008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.870038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.870264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.870296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.870435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.870464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.870672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.870701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 
00:26:45.448 [2024-07-15 18:52:01.870930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.870960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.871181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.871211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.871373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.871383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.871514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.871524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.871688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.871698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 
00:26:45.448 [2024-07-15 18:52:01.871889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.871918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.872064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.872093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.872246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.872276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.872433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.872443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.872549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.872560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 
00:26:45.448 [2024-07-15 18:52:01.872666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.872676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.872772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.872783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.873013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.873025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.873199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.873235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.873404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.873434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 
00:26:45.448 [2024-07-15 18:52:01.873661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.448 [2024-07-15 18:52:01.873690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.448 qpair failed and we were unable to recover it. 00:26:45.448 [2024-07-15 18:52:01.873933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.873963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.874110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.874153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.874312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.874323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.874431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.874461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 
00:26:45.449 [2024-07-15 18:52:01.874701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.874730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.874953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.874983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.875182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.875193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.875291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.875301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.875628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.875638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 
00:26:45.449 [2024-07-15 18:52:01.875742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.875752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.875980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.875990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.876191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.876202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.876314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.876325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.876519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.876549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 
00:26:45.449 [2024-07-15 18:52:01.876724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.876753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.876918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.876947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.877163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.877174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.877354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.877364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.877559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.877589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 
00:26:45.449 [2024-07-15 18:52:01.877743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.877772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.878017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.878047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.878194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.878223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.878411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.878440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.878712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.878742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 
00:26:45.449 [2024-07-15 18:52:01.878871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.878900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.879050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.879079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.879219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.879234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.879400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.879435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.879641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.879670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 
00:26:45.449 [2024-07-15 18:52:01.879821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.879850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.879987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.880017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.880187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.880216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.880365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.880376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.880499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.880509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 
00:26:45.449 [2024-07-15 18:52:01.880622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.880632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.880799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.880828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.880967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.880996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.881143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.881173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-07-15 18:52:01.881330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-07-15 18:52:01.881341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 
00:26:45.449 [2024-07-15 18:52:01.881441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.449 [2024-07-15 18:52:01.881451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.449 qpair failed and we were unable to recover it.
00:26:45.453 [2024-07-15 18:52:01.903604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.903635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-07-15 18:52:01.903774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.903804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-07-15 18:52:01.903956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.903985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-07-15 18:52:01.904191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.904222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-07-15 18:52:01.904399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.904429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 
00:26:45.453 [2024-07-15 18:52:01.904653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.904684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-07-15 18:52:01.904902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.904932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-07-15 18:52:01.905080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.905110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-07-15 18:52:01.905263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.905298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-07-15 18:52:01.905419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.905429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 
00:26:45.453 [2024-07-15 18:52:01.905522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.905532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-07-15 18:52:01.905619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.905629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-07-15 18:52:01.905735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.905745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-07-15 18:52:01.905836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.905846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-07-15 18:52:01.906020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.906031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 
00:26:45.453 [2024-07-15 18:52:01.906158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.906169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-07-15 18:52:01.906283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.906293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-07-15 18:52:01.906398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.906409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-07-15 18:52:01.906478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.906488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-07-15 18:52:01.906660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.906690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 
00:26:45.453 [2024-07-15 18:52:01.906910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.906940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-07-15 18:52:01.907081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.907111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-07-15 18:52:01.907255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.907265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-07-15 18:52:01.907381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.907408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-07-15 18:52:01.907632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.907663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 
00:26:45.453 [2024-07-15 18:52:01.907818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.907848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-07-15 18:52:01.908121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.908150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-07-15 18:52:01.908302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.908334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-07-15 18:52:01.908495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.908506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-07-15 18:52:01.908670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.908679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 
00:26:45.453 [2024-07-15 18:52:01.908837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.908847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-07-15 18:52:01.909025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.909055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-07-15 18:52:01.909276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.909308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-07-15 18:52:01.909457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.909492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-07-15 18:52:01.909642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.909671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 
00:26:45.453 [2024-07-15 18:52:01.909819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-07-15 18:52:01.909849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.910007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.910037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.910276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.910309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.910451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.910482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.910621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.910630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 
00:26:45.454 [2024-07-15 18:52:01.910810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.910840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.911131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.911162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.911308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.911347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.911445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.911468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.911697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.911708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 
00:26:45.454 [2024-07-15 18:52:01.911807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.911817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.912069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.912106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.912270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.912301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.912457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.912486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.912703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.912732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 
00:26:45.454 [2024-07-15 18:52:01.912866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.912895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.913053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.913083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.913243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.913274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.913480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.913509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.913722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.913751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 
00:26:45.454 [2024-07-15 18:52:01.913979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.914009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.914228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.914239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.914439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.914469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.914627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.914657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.914880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.914909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 
00:26:45.454 [2024-07-15 18:52:01.915065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.915075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.915242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.915272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.915487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.915517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.915729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.915759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.915900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.915930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 
00:26:45.454 [2024-07-15 18:52:01.916142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.916152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.916406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.916437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.916661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.916690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.916903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.916932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.917149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.917160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 
00:26:45.454 [2024-07-15 18:52:01.917272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.917283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.917449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.917459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.917579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.917588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.917750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.917785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 00:26:45.454 [2024-07-15 18:52:01.918102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.454 [2024-07-15 18:52:01.918132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.454 qpair failed and we were unable to recover it. 
00:26:45.454 [2024-07-15 18:52:01.918298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.454 [2024-07-15 18:52:01.918329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.454 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt, with timestamps advancing from 18:52:01.918629 through 18:52:01.940791 ...]
00:26:45.458 [2024-07-15 18:52:01.940969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.940999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.458 [2024-07-15 18:52:01.941191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.941221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.458 [2024-07-15 18:52:01.941485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.941495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.458 [2024-07-15 18:52:01.941588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.941599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.458 [2024-07-15 18:52:01.941773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.941804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 
00:26:45.458 [2024-07-15 18:52:01.941947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.941976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.458 [2024-07-15 18:52:01.942124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.942154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.458 [2024-07-15 18:52:01.942422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.942453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.458 [2024-07-15 18:52:01.942616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.942646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.458 [2024-07-15 18:52:01.942866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.942895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 
00:26:45.458 [2024-07-15 18:52:01.943042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.943072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.458 [2024-07-15 18:52:01.943348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.943378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.458 [2024-07-15 18:52:01.943534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.943564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.458 [2024-07-15 18:52:01.943768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.943798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.458 [2024-07-15 18:52:01.943938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.943968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 
00:26:45.458 [2024-07-15 18:52:01.944171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.944200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.458 [2024-07-15 18:52:01.944407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.944417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.458 [2024-07-15 18:52:01.944584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.944594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.458 [2024-07-15 18:52:01.944719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.944748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.458 [2024-07-15 18:52:01.944917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.944946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 
00:26:45.458 [2024-07-15 18:52:01.945112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.945142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.458 [2024-07-15 18:52:01.945305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.945316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.458 [2024-07-15 18:52:01.945523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.945533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.458 [2024-07-15 18:52:01.945635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.945645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.458 [2024-07-15 18:52:01.945746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.945757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 
00:26:45.458 [2024-07-15 18:52:01.945855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.945864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.458 [2024-07-15 18:52:01.945967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.945977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.458 [2024-07-15 18:52:01.946086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.946095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.458 [2024-07-15 18:52:01.946201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.946212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.458 [2024-07-15 18:52:01.946449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.946460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 
00:26:45.458 [2024-07-15 18:52:01.946627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.946657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.458 [2024-07-15 18:52:01.946863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.946892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.458 [2024-07-15 18:52:01.947099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.947129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.458 [2024-07-15 18:52:01.947338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.947369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.458 [2024-07-15 18:52:01.947514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.947543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 
00:26:45.458 [2024-07-15 18:52:01.947690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.947720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.458 [2024-07-15 18:52:01.947871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.458 [2024-07-15 18:52:01.947901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.458 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.948040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.948070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.948232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.948269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.948390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.948401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 
00:26:45.459 [2024-07-15 18:52:01.948526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.948536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.948631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.948660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.948803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.948832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.949038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.949071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.949279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.949289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 
00:26:45.459 [2024-07-15 18:52:01.949472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.949501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.949725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.949755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.949965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.949994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.950140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.950150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.950327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.950337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 
00:26:45.459 [2024-07-15 18:52:01.950501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.950531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.950693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.950724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.950862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.950892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.951101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.951130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.951280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.951311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 
00:26:45.459 [2024-07-15 18:52:01.951526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.951555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.951764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.951793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.951941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.951970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.952135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.952165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.952369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.952380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 
00:26:45.459 [2024-07-15 18:52:01.952487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.952496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.952667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.952677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.952778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.952807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.952966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.952996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.953151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.953180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 
00:26:45.459 [2024-07-15 18:52:01.953353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.953386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.953631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.953641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.953759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.953769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.953878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.953888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.954068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.954078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 
00:26:45.459 [2024-07-15 18:52:01.954217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.954261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.954388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.954403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.954584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.954598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.954735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.954765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.954917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.954946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 
00:26:45.459 [2024-07-15 18:52:01.955084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.955114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.955251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-07-15 18:52:01.955283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-07-15 18:52:01.955431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-07-15 18:52:01.955461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-07-15 18:52:01.955670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-07-15 18:52:01.955701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-07-15 18:52:01.955972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-07-15 18:52:01.956010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 
00:26:45.461 [2024-07-15 18:52:01.968770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.461 [2024-07-15 18:52:01.968839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.461 qpair failed and we were unable to recover it.
00:26:45.463 [2024-07-15 18:52:01.980218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.980267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.980395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.980409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.980624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.980654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.980807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.980836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.980995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.981024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 
00:26:45.463 [2024-07-15 18:52:01.981150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.981180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.981335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.981355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.981530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.981560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.981797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.981827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.981997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.982027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 
00:26:45.463 [2024-07-15 18:52:01.982237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.982278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.982513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.982544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.982845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.982875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.983101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.983131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.983297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.983328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 
00:26:45.463 [2024-07-15 18:52:01.983546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.983577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.983826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.983856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.984131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.984170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.984286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.984301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.984409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.984422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 
00:26:45.463 [2024-07-15 18:52:01.984592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.984621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.984762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.984792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.985002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.985031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.985250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.985281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.985497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.985526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 
00:26:45.463 [2024-07-15 18:52:01.985745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.985776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.985914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.985942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.986235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.986266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.986454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.986474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.986646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.986676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 
00:26:45.463 [2024-07-15 18:52:01.986829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.986857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.987016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.987046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.987187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.987217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.987382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.987412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.987706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.987738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 
00:26:45.463 [2024-07-15 18:52:01.988010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.988041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.988207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.988246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.463 qpair failed and we were unable to recover it. 00:26:45.463 [2024-07-15 18:52:01.988390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.463 [2024-07-15 18:52:01.988403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.988524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.988538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.988668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.988681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 
00:26:45.464 [2024-07-15 18:52:01.988862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.988892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.989149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.989179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.989356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.989387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.989610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.989639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.989900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.989929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 
00:26:45.464 [2024-07-15 18:52:01.990144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.990174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.990400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.990431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.990633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.990647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.990775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.990804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.991028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.991056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 
00:26:45.464 [2024-07-15 18:52:01.991267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.991298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.991496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.991509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.991769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.991800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.992019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.992048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.992324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.992355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 
00:26:45.464 [2024-07-15 18:52:01.992546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.992576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.992730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.992760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.992967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.992996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.993200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.993237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.993459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.993472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 
00:26:45.464 [2024-07-15 18:52:01.993607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.993622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.993741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.993754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.993960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.993974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.994079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.994092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.994218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.994254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 
00:26:45.464 [2024-07-15 18:52:01.994419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.994448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.994594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.994624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.994783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.994813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.995082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.995117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.995325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.995340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 
00:26:45.464 [2024-07-15 18:52:01.995622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.995652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.995869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.995899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.996047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.996076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.996231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.996245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.996362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.996377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 
00:26:45.464 [2024-07-15 18:52:01.996536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.996565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.996717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.996746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.996981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.464 [2024-07-15 18:52:01.997011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.464 qpair failed and we were unable to recover it. 00:26:45.464 [2024-07-15 18:52:01.997150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.465 [2024-07-15 18:52:01.997180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.465 qpair failed and we were unable to recover it. 00:26:45.465 [2024-07-15 18:52:01.997398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.465 [2024-07-15 18:52:01.997429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.465 qpair failed and we were unable to recover it. 
00:26:45.467 [2024-07-15 18:52:02.021106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.467 [2024-07-15 18:52:02.021136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.467 qpair failed and we were unable to recover it. 00:26:45.467 [2024-07-15 18:52:02.021369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.467 [2024-07-15 18:52:02.021401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.467 qpair failed and we were unable to recover it. 00:26:45.467 [2024-07-15 18:52:02.021553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.467 [2024-07-15 18:52:02.021569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.021692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.021706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.021886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.021900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 
00:26:45.468 [2024-07-15 18:52:02.022088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.022118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.022289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.022321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.022482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.022513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.022787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.022819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.022967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.022998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 
00:26:45.468 [2024-07-15 18:52:02.023919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.023945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.024148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.024163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.024408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.024423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.024579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.024609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.024829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.024859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 
00:26:45.468 [2024-07-15 18:52:02.025088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.025119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.025276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.025308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.025470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.025502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.025675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.025705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.025946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.025978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 
00:26:45.468 [2024-07-15 18:52:02.026199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.026250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.026514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.026547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.026746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.026761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.026886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.026917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.027127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.027160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 
00:26:45.468 [2024-07-15 18:52:02.027404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.027448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.027640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.027654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.027763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.027776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.027899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.027913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.028155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.028168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 
00:26:45.468 [2024-07-15 18:52:02.028283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.028297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.028421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.028436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.028559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.028573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.028795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.028809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.028931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.028945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 
00:26:45.468 [2024-07-15 18:52:02.029049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.029063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.029220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.029239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.029361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.029374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.029486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.029503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.029617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.029631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 
00:26:45.468 [2024-07-15 18:52:02.029706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.029719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.029821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.468 [2024-07-15 18:52:02.029834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.468 qpair failed and we were unable to recover it. 00:26:45.468 [2024-07-15 18:52:02.030019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.030033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 00:26:45.469 [2024-07-15 18:52:02.030157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.030171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 00:26:45.469 [2024-07-15 18:52:02.030319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.030332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 
00:26:45.469 [2024-07-15 18:52:02.030503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.030518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 00:26:45.469 [2024-07-15 18:52:02.030643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.030656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 00:26:45.469 [2024-07-15 18:52:02.030769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.030783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 00:26:45.469 [2024-07-15 18:52:02.031058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.031073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 00:26:45.469 [2024-07-15 18:52:02.031188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.031202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 
00:26:45.469 [2024-07-15 18:52:02.031326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.031339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 00:26:45.469 [2024-07-15 18:52:02.031447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.031460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 00:26:45.469 [2024-07-15 18:52:02.031688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.031703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 00:26:45.469 [2024-07-15 18:52:02.031893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.031907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 00:26:45.469 [2024-07-15 18:52:02.032043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.032057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 
00:26:45.469 [2024-07-15 18:52:02.032170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.032183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 00:26:45.469 [2024-07-15 18:52:02.032313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.032329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 00:26:45.469 [2024-07-15 18:52:02.032424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.032438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 00:26:45.469 [2024-07-15 18:52:02.032570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.032584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 00:26:45.469 [2024-07-15 18:52:02.032706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.032720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 
00:26:45.469 [2024-07-15 18:52:02.032832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.032845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 00:26:45.469 [2024-07-15 18:52:02.032954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.032968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 00:26:45.469 [2024-07-15 18:52:02.033168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.033195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 00:26:45.469 [2024-07-15 18:52:02.033315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.033327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 00:26:45.469 [2024-07-15 18:52:02.033426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.033436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 
00:26:45.469 [2024-07-15 18:52:02.033608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.033619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 00:26:45.469 [2024-07-15 18:52:02.033730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.033741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 00:26:45.469 [2024-07-15 18:52:02.033973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.033984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 00:26:45.469 [2024-07-15 18:52:02.034112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.034124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 00:26:45.469 [2024-07-15 18:52:02.034229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.034240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 
00:26:45.469 [2024-07-15 18:52:02.034357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.034367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 00:26:45.469 [2024-07-15 18:52:02.034468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.034478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 00:26:45.469 [2024-07-15 18:52:02.034585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.034594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 00:26:45.469 [2024-07-15 18:52:02.034689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.034699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 00:26:45.469 [2024-07-15 18:52:02.034821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.034831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 
00:26:45.469 [2024-07-15 18:52:02.035011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.035021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.469 qpair failed and we were unable to recover it. 00:26:45.469 [2024-07-15 18:52:02.035183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.469 [2024-07-15 18:52:02.035193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.035310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.035320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.035430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.035442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.035629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.035639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 
00:26:45.470 [2024-07-15 18:52:02.035743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.035753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.035916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.035925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.036030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.036040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.036143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.036153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.036268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.036279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 
00:26:45.470 [2024-07-15 18:52:02.036389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.036399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.036503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.036513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.036614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.036624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.036750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.036760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.036858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.036868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 
00:26:45.470 [2024-07-15 18:52:02.036970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.036979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.037075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.037085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.037192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.037203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.037309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.037319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.037426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.037436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 
00:26:45.470 [2024-07-15 18:52:02.037679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.037689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.037781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.037792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.037955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.037965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.038190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.038201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.038306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.038317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 
00:26:45.470 [2024-07-15 18:52:02.038426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.038436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.038594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.038604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.038714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.038724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.038816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.038826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.038928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.038938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 
00:26:45.470 [2024-07-15 18:52:02.039170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.039180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.039299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.039309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.039409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.039419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.039578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.039588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.039750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.039760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 
00:26:45.470 [2024-07-15 18:52:02.039927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.039937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.040100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.040110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.040211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.040221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.040414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.040424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.040603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.040613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 
00:26:45.470 [2024-07-15 18:52:02.040722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.470 [2024-07-15 18:52:02.040732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.470 qpair failed and we were unable to recover it. 00:26:45.470 [2024-07-15 18:52:02.040826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.040836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.040932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.040942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.041053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.041065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.041174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.041184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 
00:26:45.471 [2024-07-15 18:52:02.041307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.041317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.041534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.041544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.041641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.041651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.041760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.041770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.041866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.041877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 
00:26:45.471 [2024-07-15 18:52:02.042061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.042071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.042270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.042280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.042391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.042401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.042573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.042583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.042686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.042695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 
00:26:45.471 [2024-07-15 18:52:02.042809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.042818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.043014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.043024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.043139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.043150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.043263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.043274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.043387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.043398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 
00:26:45.471 [2024-07-15 18:52:02.043528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.043538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.043696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.043706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.043817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.043827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.043933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.043943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.044123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.044132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 
00:26:45.471 [2024-07-15 18:52:02.044230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.044240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.044416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.044426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.044525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.044535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.044639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.044650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.044744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.044754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 
00:26:45.471 [2024-07-15 18:52:02.044919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.044929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.045088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.045098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.045259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.045270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.045370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.045380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.045617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.045647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 
00:26:45.471 [2024-07-15 18:52:02.045857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.045887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.046161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.046191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.046472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.046503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.046716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.046754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.046953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.046963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 
00:26:45.471 [2024-07-15 18:52:02.047169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.047179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.047286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.047297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.047419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.471 [2024-07-15 18:52:02.047429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.471 qpair failed and we were unable to recover it. 00:26:45.471 [2024-07-15 18:52:02.047543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.472 [2024-07-15 18:52:02.047555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.472 qpair failed and we were unable to recover it. 00:26:45.472 [2024-07-15 18:52:02.047673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.472 [2024-07-15 18:52:02.047683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.472 qpair failed and we were unable to recover it. 
00:26:45.472 [2024-07-15 18:52:02.047841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.472 [2024-07-15 18:52:02.047851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.472 qpair failed and we were unable to recover it. 00:26:45.472 [2024-07-15 18:52:02.047974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.472 [2024-07-15 18:52:02.048003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.472 qpair failed and we were unable to recover it. 00:26:45.472 [2024-07-15 18:52:02.048158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.472 [2024-07-15 18:52:02.048188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.472 qpair failed and we were unable to recover it. 00:26:45.472 [2024-07-15 18:52:02.048467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.472 [2024-07-15 18:52:02.048498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.472 qpair failed and we were unable to recover it. 00:26:45.472 [2024-07-15 18:52:02.048709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.472 [2024-07-15 18:52:02.048720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.472 qpair failed and we were unable to recover it. 
00:26:45.472 [2024-07-15 18:52:02.048964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.472 [2024-07-15 18:52:02.048994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.472 qpair failed and we were unable to recover it. 00:26:45.472 [2024-07-15 18:52:02.049205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.472 [2024-07-15 18:52:02.049247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.472 qpair failed and we were unable to recover it. 00:26:45.472 [2024-07-15 18:52:02.049474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.472 [2024-07-15 18:52:02.049504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.472 qpair failed and we were unable to recover it. 00:26:45.472 [2024-07-15 18:52:02.049659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.472 [2024-07-15 18:52:02.049688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.472 qpair failed and we were unable to recover it. 00:26:45.472 [2024-07-15 18:52:02.049830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.472 [2024-07-15 18:52:02.049859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.472 qpair failed and we were unable to recover it. 
00:26:45.472 [2024-07-15 18:52:02.050157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.472 [2024-07-15 18:52:02.050187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.472 qpair failed and we were unable to recover it. 00:26:45.472 [2024-07-15 18:52:02.050407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.472 [2024-07-15 18:52:02.050437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.472 qpair failed and we were unable to recover it. 00:26:45.472 [2024-07-15 18:52:02.050661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.472 [2024-07-15 18:52:02.050691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.472 qpair failed and we were unable to recover it. 00:26:45.472 [2024-07-15 18:52:02.050907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.472 [2024-07-15 18:52:02.050937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.472 qpair failed and we were unable to recover it. 00:26:45.472 [2024-07-15 18:52:02.051082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.472 [2024-07-15 18:52:02.051112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.472 qpair failed and we were unable to recover it. 
00:26:45.472 [2024-07-15 18:52:02.051311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.472 [2024-07-15 18:52:02.051343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.472 qpair failed and we were unable to recover it. 00:26:45.472 [2024-07-15 18:52:02.051563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.472 [2024-07-15 18:52:02.051593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.472 qpair failed and we were unable to recover it. 00:26:45.472 [2024-07-15 18:52:02.051731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.472 [2024-07-15 18:52:02.051760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.472 qpair failed and we were unable to recover it. 00:26:45.472 [2024-07-15 18:52:02.051965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.472 [2024-07-15 18:52:02.051995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.472 qpair failed and we were unable to recover it. 00:26:45.472 [2024-07-15 18:52:02.052199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.472 [2024-07-15 18:52:02.052237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.472 qpair failed and we were unable to recover it. 
00:26:45.472 [2024-07-15 18:52:02.052388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.472 [2024-07-15 18:52:02.052397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.472 qpair failed and we were unable to recover it.
00:26:45.472 [2024-07-15 18:52:02.052518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.472 [2024-07-15 18:52:02.052528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.472 qpair failed and we were unable to recover it.
00:26:45.472 [2024-07-15 18:52:02.052620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.472 [2024-07-15 18:52:02.052629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.472 qpair failed and we were unable to recover it.
00:26:45.472 [2024-07-15 18:52:02.052911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.472 [2024-07-15 18:52:02.052941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.472 qpair failed and we were unable to recover it.
00:26:45.472 [2024-07-15 18:52:02.053173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.472 [2024-07-15 18:52:02.053203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.472 qpair failed and we were unable to recover it.
00:26:45.472 [2024-07-15 18:52:02.053443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.472 [2024-07-15 18:52:02.053478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.472 qpair failed and we were unable to recover it.
00:26:45.472 [2024-07-15 18:52:02.053620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.472 [2024-07-15 18:52:02.053650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.472 qpair failed and we were unable to recover it.
00:26:45.472 [2024-07-15 18:52:02.053899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.472 [2024-07-15 18:52:02.053930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.472 qpair failed and we were unable to recover it.
00:26:45.472 [2024-07-15 18:52:02.054084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.472 [2024-07-15 18:52:02.054114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.472 qpair failed and we were unable to recover it.
00:26:45.472 [2024-07-15 18:52:02.054259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.472 [2024-07-15 18:52:02.054289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.472 qpair failed and we were unable to recover it.
00:26:45.472 [2024-07-15 18:52:02.054444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.472 [2024-07-15 18:52:02.054474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.472 qpair failed and we were unable to recover it.
00:26:45.472 [2024-07-15 18:52:02.054719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.472 [2024-07-15 18:52:02.054734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.472 qpair failed and we were unable to recover it.
00:26:45.472 [2024-07-15 18:52:02.054920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.472 [2024-07-15 18:52:02.054934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.472 qpair failed and we were unable to recover it.
00:26:45.472 [2024-07-15 18:52:02.055050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.472 [2024-07-15 18:52:02.055064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.472 qpair failed and we were unable to recover it.
00:26:45.472 [2024-07-15 18:52:02.055315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.472 [2024-07-15 18:52:02.055346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.472 qpair failed and we were unable to recover it.
00:26:45.472 [2024-07-15 18:52:02.055498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.472 [2024-07-15 18:52:02.055526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.472 qpair failed and we were unable to recover it.
00:26:45.472 [2024-07-15 18:52:02.055738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.472 [2024-07-15 18:52:02.055768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.472 qpair failed and we were unable to recover it.
00:26:45.472 [2024-07-15 18:52:02.055987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.472 [2024-07-15 18:52:02.056017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.472 qpair failed and we were unable to recover it.
00:26:45.472 [2024-07-15 18:52:02.056241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.472 [2024-07-15 18:52:02.056277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.472 qpair failed and we were unable to recover it.
00:26:45.472 [2024-07-15 18:52:02.056395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.472 [2024-07-15 18:52:02.056408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.472 qpair failed and we were unable to recover it.
00:26:45.472 [2024-07-15 18:52:02.056534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.472 [2024-07-15 18:52:02.056548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.472 qpair failed and we were unable to recover it.
00:26:45.472 [2024-07-15 18:52:02.056699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.056713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.056892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.056905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.057015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.057029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.057143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.057157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.057271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.057301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.057522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.057551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.057771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.057800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.058017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.058047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.058197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.058235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.058406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.058444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.058558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.058573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.058760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.058790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.059019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.059049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.059191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.059220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.059435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.059449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.059572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.059602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.059739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.059769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.059943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.059973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.060133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.060162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.060311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.060342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.060490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.060520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.060658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.060672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.060786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.060800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.060990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.061004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.061214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.061251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.061386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.061401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.061512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.061545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.061700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.061730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.061880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.061910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.062119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.062149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.062304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.062337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.062494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.062524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.062664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.062677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.062788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.062802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.062973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.062986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.063099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.063129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.063278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.473 [2024-07-15 18:52:02.063310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.473 qpair failed and we were unable to recover it.
00:26:45.473 [2024-07-15 18:52:02.063532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.063571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.063714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.063729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.063895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.063909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.064020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.064034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.064239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.064273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.064416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.064446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.064601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.064631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.064769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.064783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.064903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.064917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.065023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.065037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.065248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.065260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.065355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.065365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.065485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.065514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.065736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.065766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.065927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.065959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.066116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.066146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.066351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.066381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.066592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.066631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.066801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.066811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.067030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.067060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.067270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.067300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.067578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.067608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.067824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.067853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.068003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.068032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.068189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.068219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.068440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.068451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.068627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.068656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.068882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.068949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.069109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.069144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.069361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.069392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.069556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.069586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.069727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.069757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.069922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.069952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.070178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.070209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.070436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.070467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.070610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.070653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.070827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.070841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.071019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.071049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.071194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.071238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.071395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.071425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.071557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.071571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.474 [2024-07-15 18:52:02.071677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.474 [2024-07-15 18:52:02.071691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.474 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.071814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.071828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.071976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.072006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.072160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.072191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.072368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.072401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.072554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.072585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.072744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.072773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.072910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.072940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.073101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.073131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.073448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.073479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.073683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.073714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.073993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.074023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.074171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.074200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.074394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.074425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.074651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.074681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.074833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.074864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.075150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.075179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.075329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.075360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.075512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.075542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.075702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.075732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.075885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.075915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.076086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.076115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.076270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.076301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.076451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.076480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.076636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.076666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.076802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.076832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.077042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.077078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.077239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.077271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.077406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.077436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.077592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.077622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.077787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.077817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.078025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.078055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.078211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.078251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.078403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.078433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.078579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.078609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.078802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.078817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.078925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.078955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.079119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.079149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.079444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.079475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.475 [2024-07-15 18:52:02.079631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.475 [2024-07-15 18:52:02.079662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.475 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.079892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.079922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.080171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.080201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.080436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.080467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.080687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.080717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.080853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.080868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.081056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.081087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.081293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.081324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.081554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.081584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.081808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.081822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.081941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.081954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.082065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.082078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.082195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.082209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.082398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.082429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.082645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.082676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.082836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.082866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.083012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.083042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.083311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.083342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.083557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.083587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.083790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.083819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.084042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.084072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.084343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.084383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.084605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.084619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.084746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.084759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.084995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.085008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.085207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.085245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.085414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.085444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.085686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.085721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.085855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.085868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.085990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.086004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.086114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.086128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.086243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.086258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.086515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.086529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.086766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.086795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.087036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.087066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.087279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.087311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.087528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.087558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.087782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.087796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.087989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.088019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.088255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.088286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.088511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.088541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.088686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.088701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.088805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.088820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.089010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.476 [2024-07-15 18:52:02.089040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.476 qpair failed and we were unable to recover it.
00:26:45.476 [2024-07-15 18:52:02.089187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.477 [2024-07-15 18:52:02.089216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.477 qpair failed and we were unable to recover it.
00:26:45.477 [2024-07-15 18:52:02.089376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.477 [2024-07-15 18:52:02.089407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.477 qpair failed and we were unable to recover it.
00:26:45.477 [2024-07-15 18:52:02.089542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.477 [2024-07-15 18:52:02.089555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.477 qpair failed and we were unable to recover it.
00:26:45.477 [2024-07-15 18:52:02.089721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.477 [2024-07-15 18:52:02.089734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.477 qpair failed and we were unable to recover it.
00:26:45.477 [2024-07-15 18:52:02.089909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.477 [2024-07-15 18:52:02.089922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.477 qpair failed and we were unable to recover it.
00:26:45.477 [2024-07-15 18:52:02.090094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.477 [2024-07-15 18:52:02.090114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.477 qpair failed and we were unable to recover it.
00:26:45.477 [2024-07-15 18:52:02.090288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.477 [2024-07-15 18:52:02.090303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.477 qpair failed and we were unable to recover it.
00:26:45.477 [2024-07-15 18:52:02.090424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.477 [2024-07-15 18:52:02.090438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.477 qpair failed and we were unable to recover it.
00:26:45.477 [2024-07-15 18:52:02.090677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.090691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-07-15 18:52:02.090864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.090878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-07-15 18:52:02.091052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.091065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-07-15 18:52:02.091286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.091319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-07-15 18:52:02.091484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.091514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 
00:26:45.477 [2024-07-15 18:52:02.091723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.091753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-07-15 18:52:02.091980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.092010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-07-15 18:52:02.092172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.092202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-07-15 18:52:02.092415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.092429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-07-15 18:52:02.092562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.092592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 
00:26:45.477 [2024-07-15 18:52:02.092799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.092828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-07-15 18:52:02.093052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.093082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-07-15 18:52:02.093301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.093332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-07-15 18:52:02.093559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.093573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-07-15 18:52:02.093706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.093720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 
00:26:45.477 [2024-07-15 18:52:02.093822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.093854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-07-15 18:52:02.093971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.094001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-07-15 18:52:02.094220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.094258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-07-15 18:52:02.094533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.094563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-07-15 18:52:02.094731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.094761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 
00:26:45.477 [2024-07-15 18:52:02.094960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.094974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-07-15 18:52:02.095078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.095091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-07-15 18:52:02.095281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.095296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-07-15 18:52:02.095418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.095432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-07-15 18:52:02.095543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.095556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 
00:26:45.477 [2024-07-15 18:52:02.095731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.095745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-07-15 18:52:02.095865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.095880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-07-15 18:52:02.095991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.096022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-07-15 18:52:02.096238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.096270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 00:26:45.477 [2024-07-15 18:52:02.096404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.477 [2024-07-15 18:52:02.096434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.477 qpair failed and we were unable to recover it. 
00:26:45.477 [2024-07-15 18:52:02.096576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-07-15 18:52:02.096606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.478 qpair failed and we were unable to recover it. 00:26:45.478 [2024-07-15 18:52:02.096757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-07-15 18:52:02.096787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.478 qpair failed and we were unable to recover it. 00:26:45.478 [2024-07-15 18:52:02.096952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-07-15 18:52:02.096981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.478 qpair failed and we were unable to recover it. 00:26:45.478 [2024-07-15 18:52:02.097251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-07-15 18:52:02.097282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.478 qpair failed and we were unable to recover it. 00:26:45.478 [2024-07-15 18:52:02.097419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-07-15 18:52:02.097450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.478 qpair failed and we were unable to recover it. 
00:26:45.478 [2024-07-15 18:52:02.097585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.478 [2024-07-15 18:52:02.097615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.478 qpair failed and we were unable to recover it. 00:26:45.761 [2024-07-15 18:52:02.097914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.761 [2024-07-15 18:52:02.097945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.761 qpair failed and we were unable to recover it. 00:26:45.761 [2024-07-15 18:52:02.098104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.761 [2024-07-15 18:52:02.098135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.761 qpair failed and we were unable to recover it. 00:26:45.761 [2024-07-15 18:52:02.098267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.761 [2024-07-15 18:52:02.098298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.761 qpair failed and we were unable to recover it. 00:26:45.761 [2024-07-15 18:52:02.098500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.761 [2024-07-15 18:52:02.098514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.761 qpair failed and we were unable to recover it. 
00:26:45.761 [2024-07-15 18:52:02.098658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.761 [2024-07-15 18:52:02.098688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.761 qpair failed and we were unable to recover it. 00:26:45.761 [2024-07-15 18:52:02.098891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.761 [2024-07-15 18:52:02.098921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.761 qpair failed and we were unable to recover it. 00:26:45.761 [2024-07-15 18:52:02.099067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.761 [2024-07-15 18:52:02.099097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.761 qpair failed and we were unable to recover it. 00:26:45.761 [2024-07-15 18:52:02.099304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.761 [2024-07-15 18:52:02.099335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.761 qpair failed and we were unable to recover it. 00:26:45.761 [2024-07-15 18:52:02.099495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.099508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 
00:26:45.762 [2024-07-15 18:52:02.099625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.099656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.100875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.100900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.101248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.101281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.101495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.101526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.101738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.101768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 
00:26:45.762 [2024-07-15 18:52:02.101979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.102010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.102244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.102274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.102520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.102550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.102733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.102763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.102977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.102991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 
00:26:45.762 [2024-07-15 18:52:02.103098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.103136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.103432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.103463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.103636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.103666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.103945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.103976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.104143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.104173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 
00:26:45.762 [2024-07-15 18:52:02.104474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.104505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.104708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.104738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.105013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.105043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.105259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.105291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.105451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.105481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 
00:26:45.762 [2024-07-15 18:52:02.105774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.105811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.105966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.105997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.106216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.106255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.106417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.106448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.106610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.106650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 
00:26:45.762 [2024-07-15 18:52:02.106945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.106975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.107122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.107153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.107303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.107334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.107501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.107537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.107653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.107667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 
00:26:45.762 [2024-07-15 18:52:02.107904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.107934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.108147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.108176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.108354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.108385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.108543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.108573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.108805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.108836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 
00:26:45.762 [2024-07-15 18:52:02.109054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.109085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.109306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.109337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.109485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.109500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.109690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.762 [2024-07-15 18:52:02.109720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.762 qpair failed and we were unable to recover it. 00:26:45.762 [2024-07-15 18:52:02.109897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.763 [2024-07-15 18:52:02.109927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.763 qpair failed and we were unable to recover it. 
00:26:45.763 [2024-07-15 18:52:02.110098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.763 [2024-07-15 18:52:02.110128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.763 qpair failed and we were unable to recover it.
00:26:45.765 [... identical connect() failed (errno = 111) / sock connection error pairs for tqpair=0x7ff7b8000b90 (addr=10.0.0.2, port=4420) repeated continuously through 18:52:02.134809, each ending with "qpair failed and we were unable to recover it." ...]
00:26:45.765 [2024-07-15 18:52:02.134966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-07-15 18:52:02.134997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-07-15 18:52:02.135243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-07-15 18:52:02.135275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-07-15 18:52:02.135500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-07-15 18:52:02.135530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-07-15 18:52:02.135685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-07-15 18:52:02.135699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-07-15 18:52:02.135803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-07-15 18:52:02.135816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 
00:26:45.766 [2024-07-15 18:52:02.136001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.136014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-07-15 18:52:02.136155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.136184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-07-15 18:52:02.136349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.136380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-07-15 18:52:02.136561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.136592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-07-15 18:52:02.136750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.136764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 
00:26:45.766 [2024-07-15 18:52:02.136943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.136972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-07-15 18:52:02.137269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.137300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-07-15 18:52:02.137521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.137534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-07-15 18:52:02.137649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.137684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-07-15 18:52:02.137948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.137978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 
00:26:45.766 [2024-07-15 18:52:02.138124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.138159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-07-15 18:52:02.138323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.138354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-07-15 18:52:02.138603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.138633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-07-15 18:52:02.138794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.138824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-07-15 18:52:02.139081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.139111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 
00:26:45.766 [2024-07-15 18:52:02.139331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.139362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-07-15 18:52:02.139512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.139543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-07-15 18:52:02.139691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.139721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-07-15 18:52:02.139876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.139906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-07-15 18:52:02.140120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.140150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 
00:26:45.766 [2024-07-15 18:52:02.140299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.140329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-07-15 18:52:02.140546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.140576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-07-15 18:52:02.140803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.140843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-07-15 18:52:02.140960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.140974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-07-15 18:52:02.141142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.141156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 
00:26:45.766 [2024-07-15 18:52:02.141278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.141292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-07-15 18:52:02.141408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.141438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-07-15 18:52:02.141599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.141629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-07-15 18:52:02.141849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.141879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-07-15 18:52:02.142083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.142097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 
00:26:45.766 [2024-07-15 18:52:02.142316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.142347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-07-15 18:52:02.142518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-07-15 18:52:02.142548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-07-15 18:52:02.142842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.142872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.143011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.143041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.143252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.143282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 
00:26:45.767 [2024-07-15 18:52:02.143488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.143518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.143722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.143752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.144003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.144037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.144380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.144448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.144678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.144711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 
00:26:45.767 [2024-07-15 18:52:02.144871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.144902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.145029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.145060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.145211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.145252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.145475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.145505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.145654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.145684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 
00:26:45.767 [2024-07-15 18:52:02.145804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.145815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.145913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.145924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.146117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.146127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.146298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.146308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.146473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.146483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 
00:26:45.767 [2024-07-15 18:52:02.146582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.146595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.146706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.146716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.146847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.146857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.147049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.147079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.147349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.147381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 
00:26:45.767 [2024-07-15 18:52:02.147614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.147643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.147858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.147887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.148054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.148083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.148310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.148341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.148592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.148622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 
00:26:45.767 [2024-07-15 18:52:02.148831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.148862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.149124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.149153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.149363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.149394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.149663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.149693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.149910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.149940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 
00:26:45.767 [2024-07-15 18:52:02.150148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.150178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.150464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.150502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.150612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.150622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.150739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.150748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.150851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.150861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 
00:26:45.767 [2024-07-15 18:52:02.150969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.150979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.151092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.151102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-07-15 18:52:02.151270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-07-15 18:52:02.151281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.768 qpair failed and we were unable to recover it. 00:26:45.768 [2024-07-15 18:52:02.151511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.768 [2024-07-15 18:52:02.151540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.768 qpair failed and we were unable to recover it. 00:26:45.768 [2024-07-15 18:52:02.151770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.768 [2024-07-15 18:52:02.151800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.768 qpair failed and we were unable to recover it. 
00:26:45.768 [2024-07-15 18:52:02.152013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.768 [2024-07-15 18:52:02.152043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.768 qpair failed and we were unable to recover it. 00:26:45.768 [2024-07-15 18:52:02.152277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.768 [2024-07-15 18:52:02.152307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.768 qpair failed and we were unable to recover it. 00:26:45.768 [2024-07-15 18:52:02.152488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.768 [2024-07-15 18:52:02.152525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.768 qpair failed and we were unable to recover it. 00:26:45.768 [2024-07-15 18:52:02.152686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.768 [2024-07-15 18:52:02.152701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.768 qpair failed and we were unable to recover it. 00:26:45.768 [2024-07-15 18:52:02.152820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.768 [2024-07-15 18:52:02.152834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:45.768 qpair failed and we were unable to recover it. 
00:26:45.768 [2024-07-15 18:52:02.152989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.153020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 [2024-07-15 18:52:02.153170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.153199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1248362 Killed "${NVMF_APP[@]}" "$@"
00:26:45.768 [2024-07-15 18:52:02.153416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.153448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 [2024-07-15 18:52:02.153601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.153631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 [2024-07-15 18:52:02.153791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.153804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 18:52:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:26:45.768 18:52:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:26:45.768 18:52:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:45.768 [2024-07-15 18:52:02.154812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.154842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 18:52:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:26:45.768 [2024-07-15 18:52:02.155035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.155047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 18:52:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:45.768 [2024-07-15 18:52:02.155252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.155286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 [2024-07-15 18:52:02.156394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.156415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 [2024-07-15 18:52:02.156520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.156531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 [2024-07-15 18:52:02.156941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.156953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 [2024-07-15 18:52:02.157093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.157103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 [2024-07-15 18:52:02.157214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.157229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 [2024-07-15 18:52:02.157401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.157411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 [2024-07-15 18:52:02.157510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.157520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 [2024-07-15 18:52:02.157689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.157698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 [2024-07-15 18:52:02.157862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.157873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 [2024-07-15 18:52:02.158043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.158053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 [2024-07-15 18:52:02.158155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.158165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 [2024-07-15 18:52:02.158309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.158320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 [2024-07-15 18:52:02.158420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.158431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 [2024-07-15 18:52:02.158608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.158620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 [2024-07-15 18:52:02.158745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.158755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 [2024-07-15 18:52:02.158963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.158973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 [2024-07-15 18:52:02.159085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.159096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 [2024-07-15 18:52:02.159270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.159280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 [2024-07-15 18:52:02.159386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.159396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 [2024-07-15 18:52:02.159563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.159592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 [2024-07-15 18:52:02.159800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.159831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 [2024-07-15 18:52:02.160048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.160058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.768 qpair failed and we were unable to recover it.
00:26:45.768 [2024-07-15 18:52:02.160166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.768 [2024-07-15 18:52:02.160177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.160283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.160293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.160458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.160468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.160581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.160591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.160770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.160780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.160875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.160885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.161003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.161013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 18:52:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1249295
00:26:45.769 [2024-07-15 18:52:02.161134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.161145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 18:52:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1249295
00:26:45.769 [2024-07-15 18:52:02.161322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.161333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 18:52:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1249295 ']'
00:26:45.769 [2024-07-15 18:52:02.161485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.161496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 18:52:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:45.769 [2024-07-15 18:52:02.161665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 18:52:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:45.769 [2024-07-15 18:52:02.161675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 18:52:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:45.769 [2024-07-15 18:52:02.161803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.161817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.161912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 18:52:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:45.769 [2024-07-15 18:52:02.161922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 18:52:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
18:52:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:45.769 [2024-07-15 18:52:02.162118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.162129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.162244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.162255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.162364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.162374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.162497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.162509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.162601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.162610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.162707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.162717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.162878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.162888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.163061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.163071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.163137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.163147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.163251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.163261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.163430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.163441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.163565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.163575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.163657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.163669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.163849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.163859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.163958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.163969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.164145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.164155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.164301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.164313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.164477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.164487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.164702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.164733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.164939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.164969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.165191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.165221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.165383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.165414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.165635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.165668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.165816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.769 [2024-07-15 18:52:02.165846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.769 qpair failed and we were unable to recover it.
00:26:45.769 [2024-07-15 18:52:02.166142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.770 [2024-07-15 18:52:02.166172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.770 qpair failed and we were unable to recover it.
00:26:45.770 [2024-07-15 18:52:02.166335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.770 [2024-07-15 18:52:02.166366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.770 qpair failed and we were unable to recover it.
00:26:45.770 [2024-07-15 18:52:02.166616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.770 [2024-07-15 18:52:02.166646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.770 qpair failed and we were unable to recover it.
00:26:45.770 [2024-07-15 18:52:02.166868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.770 [2024-07-15 18:52:02.166880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.770 qpair failed and we were unable to recover it.
00:26:45.770 [2024-07-15 18:52:02.166991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.770 [2024-07-15 18:52:02.167002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.770 qpair failed and we were unable to recover it.
00:26:45.770 [2024-07-15 18:52:02.167253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.770 [2024-07-15 18:52:02.167284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.770 qpair failed and we were unable to recover it.
00:26:45.770 [2024-07-15 18:52:02.167506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.770 [2024-07-15 18:52:02.167536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.770 qpair failed and we were unable to recover it.
00:26:45.770 [2024-07-15 18:52:02.167748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.770 [2024-07-15 18:52:02.167759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.770 qpair failed and we were unable to recover it.
00:26:45.770 [2024-07-15 18:52:02.167864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.770 [2024-07-15 18:52:02.167875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.770 qpair failed and we were unable to recover it.
00:26:45.770 [2024-07-15 18:52:02.168037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.770 [2024-07-15 18:52:02.168048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.770 qpair failed and we were unable to recover it.
00:26:45.770 [2024-07-15 18:52:02.168157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.770 [2024-07-15 18:52:02.168188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.770 qpair failed and we were unable to recover it.
00:26:45.770 [2024-07-15 18:52:02.168411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.770 [2024-07-15 18:52:02.168444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.770 qpair failed and we were unable to recover it.
00:26:45.770 [2024-07-15 18:52:02.168610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.770 [2024-07-15 18:52:02.168640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.770 qpair failed and we were unable to recover it.
00:26:45.770 [2024-07-15 18:52:02.168769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.770 [2024-07-15 18:52:02.168779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.770 qpair failed and we were unable to recover it.
00:26:45.770 [2024-07-15 18:52:02.168898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.770 [2024-07-15 18:52:02.168908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.770 qpair failed and we were unable to recover it.
00:26:45.770 [2024-07-15 18:52:02.169085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.770 [2024-07-15 18:52:02.169115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.770 qpair failed and we were unable to recover it.
00:26:45.770 [2024-07-15 18:52:02.169280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.770 [2024-07-15 18:52:02.169311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.770 qpair failed and we were unable to recover it.
00:26:45.770 [2024-07-15 18:52:02.169466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.770 [2024-07-15 18:52:02.169498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.770 qpair failed and we were unable to recover it.
00:26:45.770 [2024-07-15 18:52:02.169645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.770 [2024-07-15 18:52:02.169655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.770 qpair failed and we were unable to recover it.
00:26:45.770 [2024-07-15 18:52:02.169765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.770 [2024-07-15 18:52:02.169776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.770 qpair failed and we were unable to recover it.
00:26:45.770 [2024-07-15 18:52:02.169942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.770 [2024-07-15 18:52:02.169953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.770 qpair failed and we were unable to recover it.
00:26:45.770 [2024-07-15 18:52:02.170073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.770 [2024-07-15 18:52:02.170102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.770 qpair failed and we were unable to recover it.
00:26:45.770 [2024-07-15 18:52:02.170255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.770 [2024-07-15 18:52:02.170287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.770 qpair failed and we were unable to recover it.
00:26:45.770 [2024-07-15 18:52:02.170462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.770 [2024-07-15 18:52:02.170491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.770 qpair failed and we were unable to recover it. 00:26:45.770 [2024-07-15 18:52:02.170631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.770 [2024-07-15 18:52:02.170661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.770 qpair failed and we were unable to recover it. 00:26:45.770 [2024-07-15 18:52:02.170819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.770 [2024-07-15 18:52:02.170848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.770 qpair failed and we were unable to recover it. 00:26:45.770 [2024-07-15 18:52:02.170992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.770 [2024-07-15 18:52:02.171002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.770 qpair failed and we were unable to recover it. 00:26:45.770 [2024-07-15 18:52:02.171104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.770 [2024-07-15 18:52:02.171114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.770 qpair failed and we were unable to recover it. 
00:26:45.770 [2024-07-15 18:52:02.171210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.770 [2024-07-15 18:52:02.171220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.770 qpair failed and we were unable to recover it. 00:26:45.770 [2024-07-15 18:52:02.171350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.770 [2024-07-15 18:52:02.171361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.770 qpair failed and we were unable to recover it. 00:26:45.770 [2024-07-15 18:52:02.171642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.770 [2024-07-15 18:52:02.171672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.770 qpair failed and we were unable to recover it. 00:26:45.770 [2024-07-15 18:52:02.171828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.770 [2024-07-15 18:52:02.171858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.770 qpair failed and we were unable to recover it. 00:26:45.770 [2024-07-15 18:52:02.172048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.770 [2024-07-15 18:52:02.172078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.770 qpair failed and we were unable to recover it. 
00:26:45.770 [2024-07-15 18:52:02.172290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.770 [2024-07-15 18:52:02.172321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.770 qpair failed and we were unable to recover it. 00:26:45.770 [2024-07-15 18:52:02.172539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.770 [2024-07-15 18:52:02.172569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.770 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.172854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.172865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.173172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.173202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.173360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.173391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 
00:26:45.771 [2024-07-15 18:52:02.173604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.173634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.173796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.173807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.173911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.173936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.174092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.174125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.174290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.174321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 
00:26:45.771 [2024-07-15 18:52:02.174533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.174568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.174784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.174814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.175108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.175138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.175297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.175329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.175489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.175520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 
00:26:45.771 [2024-07-15 18:52:02.175677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.175708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.175981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.176010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.176223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.176267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.176504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.176534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.176700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.176710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 
00:26:45.771 [2024-07-15 18:52:02.176882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.176913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.177160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.177190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.177355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.177386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.177610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.177640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.177795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.177805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 
00:26:45.771 [2024-07-15 18:52:02.178082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.178093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.178345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.178376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.178528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.178559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.178708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.178738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.178890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.178919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 
00:26:45.771 [2024-07-15 18:52:02.179036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.179047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.179318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.179328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.179444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.179454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.179569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.179580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.179767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.179798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 
00:26:45.771 [2024-07-15 18:52:02.180004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.180033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.180171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.180200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.180506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.180537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.180776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.180807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.180970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.180980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 
00:26:45.771 [2024-07-15 18:52:02.181099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.181110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.181333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.771 [2024-07-15 18:52:02.181344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.771 qpair failed and we were unable to recover it. 00:26:45.771 [2024-07-15 18:52:02.181522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.181552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 00:26:45.772 [2024-07-15 18:52:02.181697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.181728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 00:26:45.772 [2024-07-15 18:52:02.181953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.181983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 
00:26:45.772 [2024-07-15 18:52:02.182121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.182132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 00:26:45.772 [2024-07-15 18:52:02.182294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.182305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 00:26:45.772 [2024-07-15 18:52:02.182548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.182578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 00:26:45.772 [2024-07-15 18:52:02.182721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.182752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 00:26:45.772 [2024-07-15 18:52:02.183504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.183523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 
00:26:45.772 [2024-07-15 18:52:02.183708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.183721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 00:26:45.772 [2024-07-15 18:52:02.183883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.183893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 00:26:45.772 [2024-07-15 18:52:02.184070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.184080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 00:26:45.772 [2024-07-15 18:52:02.184176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.184185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 00:26:45.772 [2024-07-15 18:52:02.184415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.184425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 
00:26:45.772 [2024-07-15 18:52:02.184587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.184599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 00:26:45.772 [2024-07-15 18:52:02.184776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.184786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 00:26:45.772 [2024-07-15 18:52:02.184912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.184922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 00:26:45.772 [2024-07-15 18:52:02.185034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.185044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 00:26:45.772 [2024-07-15 18:52:02.185154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.185164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 
00:26:45.772 [2024-07-15 18:52:02.185242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.185252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 00:26:45.772 [2024-07-15 18:52:02.185412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.185423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 00:26:45.772 [2024-07-15 18:52:02.185535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.185545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 00:26:45.772 [2024-07-15 18:52:02.185766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.185777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 00:26:45.772 [2024-07-15 18:52:02.185909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.185919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 
00:26:45.772 [2024-07-15 18:52:02.186029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.186040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 00:26:45.772 [2024-07-15 18:52:02.186151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.186162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 00:26:45.772 [2024-07-15 18:52:02.186344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.186357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 00:26:45.772 [2024-07-15 18:52:02.186529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.186541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 00:26:45.772 [2024-07-15 18:52:02.186671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.186683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 
00:26:45.772 [2024-07-15 18:52:02.186791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.186801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 00:26:45.772 [2024-07-15 18:52:02.186974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.186985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 00:26:45.772 [2024-07-15 18:52:02.187090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.187100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 00:26:45.772 [2024-07-15 18:52:02.187215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.187229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 00:26:45.772 [2024-07-15 18:52:02.187341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.772 [2024-07-15 18:52:02.187351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.772 qpair failed and we were unable to recover it. 
00:26:45.772 [2024-07-15 18:52:02.187470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.772 [2024-07-15 18:52:02.187480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.772 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" sequence for tqpair=0x7ff7c0000b90 (addr=10.0.0.2, port=4420) repeated 114 more times between 18:52:02.187579 and 18:52:02.204154 ...]
00:26:45.775 [2024-07-15 18:52:02.204316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-07-15 18:52:02.204326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.775 qpair failed and we were unable to recover it. 00:26:45.775 [2024-07-15 18:52:02.204489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-07-15 18:52:02.204499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.775 qpair failed and we were unable to recover it. 00:26:45.775 [2024-07-15 18:52:02.204753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-07-15 18:52:02.204763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.775 qpair failed and we were unable to recover it. 00:26:45.775 [2024-07-15 18:52:02.204988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-07-15 18:52:02.204998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.775 qpair failed and we were unable to recover it. 00:26:45.775 [2024-07-15 18:52:02.205168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-07-15 18:52:02.205178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.775 qpair failed and we were unable to recover it. 
00:26:45.775 [2024-07-15 18:52:02.205276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-07-15 18:52:02.205287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.775 qpair failed and we were unable to recover it. 00:26:45.775 [2024-07-15 18:52:02.205446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.775 [2024-07-15 18:52:02.205457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-07-15 18:52:02.205565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.205575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-07-15 18:52:02.205677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.205687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-07-15 18:52:02.205780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.205790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 
00:26:45.776 [2024-07-15 18:52:02.205960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.205970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-07-15 18:52:02.206078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.206087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-07-15 18:52:02.206200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.206210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-07-15 18:52:02.206337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.206347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-07-15 18:52:02.206444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.206454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 
00:26:45.776 [2024-07-15 18:52:02.206586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.206596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-07-15 18:52:02.206783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.206793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-07-15 18:52:02.206972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.206982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-07-15 18:52:02.207092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.207101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-07-15 18:52:02.207275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.207286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 
00:26:45.776 [2024-07-15 18:52:02.207399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.207411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-07-15 18:52:02.207529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.207539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-07-15 18:52:02.207645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.207662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-07-15 18:52:02.207727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.207736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-07-15 18:52:02.207837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.207847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 
00:26:45.776 [2024-07-15 18:52:02.207955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.207964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-07-15 18:52:02.208060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.208070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-07-15 18:52:02.208161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.208171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-07-15 18:52:02.208283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.208293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-07-15 18:52:02.208465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.208475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 
00:26:45.776 [2024-07-15 18:52:02.208539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.208549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-07-15 18:52:02.208622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.208632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-07-15 18:52:02.208801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.208811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-07-15 18:52:02.208918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.208928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-07-15 18:52:02.209030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.209039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 
00:26:45.776 [2024-07-15 18:52:02.209266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.209276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-07-15 18:52:02.209502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.209512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-07-15 18:52:02.209608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.209618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-07-15 18:52:02.209818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.209828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 00:26:45.776 [2024-07-15 18:52:02.209909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.776 [2024-07-15 18:52:02.209926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.776 qpair failed and we were unable to recover it. 
00:26:45.776 [2024-07-15 18:52:02.210801] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization...
00:26:45.777 [2024-07-15 18:52:02.210849] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:45.777 [2024-07-15 18:52:02.213117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.777 [2024-07-15 18:52:02.213152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.777 qpair failed and we were unable to recover it.
00:26:45.777 [2024-07-15 18:52:02.213420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.777 [2024-07-15 18:52:02.213456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.777 qpair failed and we were unable to recover it.
00:26:45.777 [2024-07-15 18:52:02.213678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.777 [2024-07-15 18:52:02.213711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.777 qpair failed and we were unable to recover it.
00:26:45.778 [2024-07-15 18:52:02.219206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-07-15 18:52:02.219216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-07-15 18:52:02.219389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-07-15 18:52:02.219400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-07-15 18:52:02.219588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-07-15 18:52:02.219598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-07-15 18:52:02.219693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-07-15 18:52:02.219703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-07-15 18:52:02.219868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-07-15 18:52:02.219878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 
00:26:45.778 [2024-07-15 18:52:02.219991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-07-15 18:52:02.220003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-07-15 18:52:02.220108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-07-15 18:52:02.220119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-07-15 18:52:02.220316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-07-15 18:52:02.220327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-07-15 18:52:02.220497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-07-15 18:52:02.220507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-07-15 18:52:02.220734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-07-15 18:52:02.220745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 
00:26:45.778 [2024-07-15 18:52:02.220845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-07-15 18:52:02.220855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-07-15 18:52:02.221021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-07-15 18:52:02.221031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-07-15 18:52:02.221148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-07-15 18:52:02.221158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-07-15 18:52:02.221325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-07-15 18:52:02.221335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-07-15 18:52:02.221496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-07-15 18:52:02.221506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 
00:26:45.778 [2024-07-15 18:52:02.221664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-07-15 18:52:02.221674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-07-15 18:52:02.221809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-07-15 18:52:02.221819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-07-15 18:52:02.222060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-07-15 18:52:02.222070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-07-15 18:52:02.222132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-07-15 18:52:02.222142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-07-15 18:52:02.222256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-07-15 18:52:02.222266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 
00:26:45.778 [2024-07-15 18:52:02.222431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-07-15 18:52:02.222442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.778 [2024-07-15 18:52:02.222615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.778 [2024-07-15 18:52:02.222626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.778 qpair failed and we were unable to recover it. 00:26:45.779 [2024-07-15 18:52:02.222851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.222862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-07-15 18:52:02.223027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.223037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-07-15 18:52:02.223214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.223229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 
00:26:45.779 [2024-07-15 18:52:02.223403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.223413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-07-15 18:52:02.223617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.223627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-07-15 18:52:02.223717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.223727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-07-15 18:52:02.223810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.223821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-07-15 18:52:02.223928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.223939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 
00:26:45.779 [2024-07-15 18:52:02.224114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.224125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-07-15 18:52:02.224302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.224312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-07-15 18:52:02.224520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.224530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-07-15 18:52:02.224714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.224725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-07-15 18:52:02.224833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.224844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 
00:26:45.779 [2024-07-15 18:52:02.225071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.225082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-07-15 18:52:02.225256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.225267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-07-15 18:52:02.225495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.225506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-07-15 18:52:02.225637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.225668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-07-15 18:52:02.225939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.225969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 
00:26:45.779 [2024-07-15 18:52:02.226097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.226127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-07-15 18:52:02.226274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.226285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-07-15 18:52:02.226382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.226392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-07-15 18:52:02.226557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.226567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-07-15 18:52:02.226689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.226719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 
00:26:45.779 [2024-07-15 18:52:02.226833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.226868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-07-15 18:52:02.227028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.227059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-07-15 18:52:02.227282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.227314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-07-15 18:52:02.227532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.227567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-07-15 18:52:02.227732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.227762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 
00:26:45.779 [2024-07-15 18:52:02.227969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.227999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-07-15 18:52:02.228146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.779 [2024-07-15 18:52:02.228157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.779 qpair failed and we were unable to recover it. 00:26:45.779 [2024-07-15 18:52:02.228273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-07-15 18:52:02.228284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-07-15 18:52:02.228469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-07-15 18:52:02.228479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-07-15 18:52:02.228580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-07-15 18:52:02.228610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 
00:26:45.780 [2024-07-15 18:52:02.228773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-07-15 18:52:02.228805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-07-15 18:52:02.229026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-07-15 18:52:02.229057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-07-15 18:52:02.229213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-07-15 18:52:02.229223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-07-15 18:52:02.229382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-07-15 18:52:02.229414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-07-15 18:52:02.229744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-07-15 18:52:02.229775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 
00:26:45.780 [2024-07-15 18:52:02.229989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-07-15 18:52:02.230020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-07-15 18:52:02.230242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-07-15 18:52:02.230269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-07-15 18:52:02.230450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-07-15 18:52:02.230461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-07-15 18:52:02.230632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-07-15 18:52:02.230662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-07-15 18:52:02.230952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-07-15 18:52:02.230983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 
00:26:45.780 [2024-07-15 18:52:02.231147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-07-15 18:52:02.231157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-07-15 18:52:02.231321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-07-15 18:52:02.231333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-07-15 18:52:02.231416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-07-15 18:52:02.231426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-07-15 18:52:02.231606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-07-15 18:52:02.231617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-07-15 18:52:02.231798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-07-15 18:52:02.231829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 
00:26:45.780 [2024-07-15 18:52:02.232038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-07-15 18:52:02.232069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-07-15 18:52:02.232237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-07-15 18:52:02.232268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-07-15 18:52:02.232482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-07-15 18:52:02.232514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-07-15 18:52:02.232685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-07-15 18:52:02.232717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-07-15 18:52:02.233004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-07-15 18:52:02.233035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 
00:26:45.780 [2024-07-15 18:52:02.233236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-07-15 18:52:02.233268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-07-15 18:52:02.233470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-07-15 18:52:02.233501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-07-15 18:52:02.233727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-07-15 18:52:02.233757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-07-15 18:52:02.233889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-07-15 18:52:02.233920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 00:26:45.780 [2024-07-15 18:52:02.234140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.780 [2024-07-15 18:52:02.234150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.780 qpair failed and we were unable to recover it. 
00:26:45.782 EAL: No free 2048 kB hugepages reported on node 1
00:26:45.784 [2024-07-15 18:52:02.253631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.784 [2024-07-15 18:52:02.253641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.784 qpair failed and we were unable to recover it. 00:26:45.784 [2024-07-15 18:52:02.253734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.784 [2024-07-15 18:52:02.253744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.784 qpair failed and we were unable to recover it. 00:26:45.784 [2024-07-15 18:52:02.253942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.784 [2024-07-15 18:52:02.253952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.784 qpair failed and we were unable to recover it. 00:26:45.784 [2024-07-15 18:52:02.254035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.784 [2024-07-15 18:52:02.254045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.784 qpair failed and we were unable to recover it. 00:26:45.784 [2024-07-15 18:52:02.254207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.784 [2024-07-15 18:52:02.254218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.784 qpair failed and we were unable to recover it. 
00:26:45.784 [2024-07-15 18:52:02.254384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.784 [2024-07-15 18:52:02.254395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.784 qpair failed and we were unable to recover it. 00:26:45.784 [2024-07-15 18:52:02.254517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.784 [2024-07-15 18:52:02.254527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.784 qpair failed and we were unable to recover it. 00:26:45.784 [2024-07-15 18:52:02.254704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.784 [2024-07-15 18:52:02.254714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.784 qpair failed and we were unable to recover it. 00:26:45.784 [2024-07-15 18:52:02.254883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.784 [2024-07-15 18:52:02.254894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.784 qpair failed and we were unable to recover it. 00:26:45.784 [2024-07-15 18:52:02.255007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.784 [2024-07-15 18:52:02.255017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.784 qpair failed and we were unable to recover it. 
00:26:45.784 [2024-07-15 18:52:02.255179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.784 [2024-07-15 18:52:02.255189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.784 qpair failed and we were unable to recover it. 00:26:45.784 [2024-07-15 18:52:02.255360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.784 [2024-07-15 18:52:02.255370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.784 qpair failed and we were unable to recover it. 00:26:45.784 [2024-07-15 18:52:02.255548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.785 [2024-07-15 18:52:02.255558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.785 qpair failed and we were unable to recover it. 00:26:45.785 [2024-07-15 18:52:02.255736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.785 [2024-07-15 18:52:02.255746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.785 qpair failed and we were unable to recover it. 00:26:45.785 [2024-07-15 18:52:02.255868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.785 [2024-07-15 18:52:02.255878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.785 qpair failed and we were unable to recover it. 
00:26:45.785 [2024-07-15 18:52:02.255976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.785 [2024-07-15 18:52:02.255988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.785 qpair failed and we were unable to recover it. 00:26:45.785 [2024-07-15 18:52:02.256094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.785 [2024-07-15 18:52:02.256104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.785 qpair failed and we were unable to recover it. 00:26:45.785 [2024-07-15 18:52:02.256245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.785 [2024-07-15 18:52:02.256255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.785 qpair failed and we were unable to recover it. 00:26:45.785 [2024-07-15 18:52:02.256357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.785 [2024-07-15 18:52:02.256368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.785 qpair failed and we were unable to recover it. 00:26:45.785 [2024-07-15 18:52:02.256612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.785 [2024-07-15 18:52:02.256622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.785 qpair failed and we were unable to recover it. 
00:26:45.785 [2024-07-15 18:52:02.256792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.785 [2024-07-15 18:52:02.256802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.785 qpair failed and we were unable to recover it. 00:26:45.785 [2024-07-15 18:52:02.256896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.785 [2024-07-15 18:52:02.256906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.785 qpair failed and we were unable to recover it. 00:26:45.785 [2024-07-15 18:52:02.257145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.785 [2024-07-15 18:52:02.257155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.785 qpair failed and we were unable to recover it. 00:26:45.785 [2024-07-15 18:52:02.257336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.785 [2024-07-15 18:52:02.257346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.785 qpair failed and we were unable to recover it. 00:26:45.785 [2024-07-15 18:52:02.257528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.785 [2024-07-15 18:52:02.257538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.785 qpair failed and we were unable to recover it. 
00:26:45.785 [2024-07-15 18:52:02.257644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.785 [2024-07-15 18:52:02.257654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.785 qpair failed and we were unable to recover it. 00:26:45.785 [2024-07-15 18:52:02.257883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.785 [2024-07-15 18:52:02.257893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.785 qpair failed and we were unable to recover it. 00:26:45.785 [2024-07-15 18:52:02.258018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.785 [2024-07-15 18:52:02.258028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.785 qpair failed and we were unable to recover it. 00:26:45.785 [2024-07-15 18:52:02.258165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.785 [2024-07-15 18:52:02.258175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.785 qpair failed and we were unable to recover it. 00:26:45.785 [2024-07-15 18:52:02.258428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.785 [2024-07-15 18:52:02.258439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.785 qpair failed and we were unable to recover it. 
00:26:45.785 [2024-07-15 18:52:02.258670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.785 [2024-07-15 18:52:02.258680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.785 qpair failed and we were unable to recover it. 00:26:45.785 [2024-07-15 18:52:02.258843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.785 [2024-07-15 18:52:02.258853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.785 qpair failed and we were unable to recover it. 00:26:45.785 [2024-07-15 18:52:02.258936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.785 [2024-07-15 18:52:02.258946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.785 qpair failed and we were unable to recover it. 00:26:45.785 [2024-07-15 18:52:02.259110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.785 [2024-07-15 18:52:02.259120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.785 qpair failed and we were unable to recover it. 00:26:45.785 [2024-07-15 18:52:02.259365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.785 [2024-07-15 18:52:02.259375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.785 qpair failed and we were unable to recover it. 
00:26:45.785 [2024-07-15 18:52:02.259497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.785 [2024-07-15 18:52:02.259507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.785 qpair failed and we were unable to recover it. 00:26:45.785 [2024-07-15 18:52:02.259691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.785 [2024-07-15 18:52:02.259702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.785 qpair failed and we were unable to recover it. 00:26:45.785 [2024-07-15 18:52:02.259870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.785 [2024-07-15 18:52:02.259880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.785 qpair failed and we were unable to recover it. 00:26:45.785 [2024-07-15 18:52:02.259986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.785 [2024-07-15 18:52:02.259996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.785 qpair failed and we were unable to recover it. 00:26:45.785 [2024-07-15 18:52:02.260104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.785 [2024-07-15 18:52:02.260114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.785 qpair failed and we were unable to recover it. 
00:26:45.785 [2024-07-15 18:52:02.260209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.785 [2024-07-15 18:52:02.260219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.785 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.260386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.260396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.260517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.260527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.260778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.260789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.261020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.261030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 
00:26:45.786 [2024-07-15 18:52:02.261195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.261206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.261323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.261349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.261477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.261488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.261593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.261603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.261830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.261840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 
00:26:45.786 [2024-07-15 18:52:02.262075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.262085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.262260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.262270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.262470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.262480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.262563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.262572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.262746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.262757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 
00:26:45.786 [2024-07-15 18:52:02.263003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.263015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.263146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.263156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.263261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.263271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.263469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.263479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.263595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.263605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 
00:26:45.786 [2024-07-15 18:52:02.263701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.263711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.263848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.263858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.264017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.264027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.264228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.264239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.264487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.264497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 
00:26:45.786 [2024-07-15 18:52:02.264745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.264756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.264843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.264853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.265027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.265037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.265213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.265223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.265416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.265426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 
00:26:45.786 [2024-07-15 18:52:02.265528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.265539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.265622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.265632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.265838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.265848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.266090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.266100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.266260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.266270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 
00:26:45.786 [2024-07-15 18:52:02.266353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.266363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.266538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.266548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.266800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.266810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.786 [2024-07-15 18:52:02.266930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.786 [2024-07-15 18:52:02.266940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.786 qpair failed and we were unable to recover it. 00:26:45.787 [2024-07-15 18:52:02.267219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.267242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 
00:26:45.787 [2024-07-15 18:52:02.267337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.267348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 00:26:45.787 [2024-07-15 18:52:02.267506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.267516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 00:26:45.787 [2024-07-15 18:52:02.267641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.267651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 00:26:45.787 [2024-07-15 18:52:02.267845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.267855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 00:26:45.787 [2024-07-15 18:52:02.268027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.268037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 
00:26:45.787 [2024-07-15 18:52:02.268185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.268195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 00:26:45.787 [2024-07-15 18:52:02.268391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.268401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 00:26:45.787 [2024-07-15 18:52:02.268513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.268524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 00:26:45.787 [2024-07-15 18:52:02.268703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.268713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 00:26:45.787 [2024-07-15 18:52:02.268931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.268942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 
00:26:45.787 [2024-07-15 18:52:02.269021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.269031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 00:26:45.787 [2024-07-15 18:52:02.269163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.269172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 00:26:45.787 [2024-07-15 18:52:02.269281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.269292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 00:26:45.787 [2024-07-15 18:52:02.269517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.269527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 00:26:45.787 [2024-07-15 18:52:02.269643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.269655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 
00:26:45.787 [2024-07-15 18:52:02.269844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.269856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 00:26:45.787 [2024-07-15 18:52:02.269949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.269959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 00:26:45.787 [2024-07-15 18:52:02.270143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.270153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 00:26:45.787 [2024-07-15 18:52:02.270262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.270273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 00:26:45.787 [2024-07-15 18:52:02.270371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.270381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 
00:26:45.787 [2024-07-15 18:52:02.270583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.270593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 00:26:45.787 [2024-07-15 18:52:02.270711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.270721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 00:26:45.787 [2024-07-15 18:52:02.270835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.270845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 00:26:45.787 [2024-07-15 18:52:02.271008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.271018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 00:26:45.787 [2024-07-15 18:52:02.271247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.271257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 
00:26:45.787 [2024-07-15 18:52:02.271435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.271445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 00:26:45.787 [2024-07-15 18:52:02.271565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.271575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 00:26:45.787 [2024-07-15 18:52:02.271770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.271780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 00:26:45.787 [2024-07-15 18:52:02.271884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.271896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 00:26:45.787 [2024-07-15 18:52:02.272006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.787 [2024-07-15 18:52:02.272016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.787 qpair failed and we were unable to recover it. 
00:26:45.787 [2024-07-15 18:52:02.272140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.272151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.272332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.272342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.272595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.272606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.272726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.272738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.272905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.272916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 
00:26:45.788 [2024-07-15 18:52:02.273017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.273029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.273192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.273202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.273430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.273440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.273617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.273628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.273790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.273800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 
00:26:45.788 [2024-07-15 18:52:02.273954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.273964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.274065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.274075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.274329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.274339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.274424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.274435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.274602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.274612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 
00:26:45.788 [2024-07-15 18:52:02.274773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.274783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.274915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.274925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.275097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.275108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.275306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.275316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.275485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.275496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 
00:26:45.788 [2024-07-15 18:52:02.275664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.275675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.275790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.275801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.275891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.275901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.276071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.276082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.276244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.276255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 
00:26:45.788 [2024-07-15 18:52:02.276358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.276370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.276473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.276484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.276716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.276727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.276838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.276849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.276957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.276967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 
00:26:45.788 [2024-07-15 18:52:02.277160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.277170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.277245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.277256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.277364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.277375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.277634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.277644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.277807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.277817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 
00:26:45.788 [2024-07-15 18:52:02.277911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.277922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.278084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.788 [2024-07-15 18:52:02.278095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.788 qpair failed and we were unable to recover it. 00:26:45.788 [2024-07-15 18:52:02.278267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.789 [2024-07-15 18:52:02.278278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.789 qpair failed and we were unable to recover it. 00:26:45.789 [2024-07-15 18:52:02.278438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.789 [2024-07-15 18:52:02.278448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.789 qpair failed and we were unable to recover it. 00:26:45.789 [2024-07-15 18:52:02.278559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.789 [2024-07-15 18:52:02.278569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.789 qpair failed and we were unable to recover it. 
00:26:45.789 [2024-07-15 18:52:02.278763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.789 [2024-07-15 18:52:02.278774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.789 qpair failed and we were unable to recover it. 00:26:45.789 [2024-07-15 18:52:02.278880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.789 [2024-07-15 18:52:02.278892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.789 qpair failed and we were unable to recover it. 00:26:45.789 [2024-07-15 18:52:02.279003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.789 [2024-07-15 18:52:02.279013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.789 qpair failed and we were unable to recover it. 00:26:45.789 [2024-07-15 18:52:02.279187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.789 [2024-07-15 18:52:02.279197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.789 qpair failed and we were unable to recover it. 00:26:45.789 [2024-07-15 18:52:02.279470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.789 [2024-07-15 18:52:02.279481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.789 qpair failed and we were unable to recover it. 
00:26:45.789 [2024-07-15 18:52:02.279582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.789 [2024-07-15 18:52:02.279593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.789 qpair failed and we were unable to recover it. 00:26:45.789 [2024-07-15 18:52:02.279700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.789 [2024-07-15 18:52:02.279710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.789 qpair failed and we were unable to recover it. 00:26:45.789 [2024-07-15 18:52:02.279868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.789 [2024-07-15 18:52:02.279880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.789 qpair failed and we were unable to recover it. 00:26:45.789 [2024-07-15 18:52:02.280042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.789 [2024-07-15 18:52:02.280052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.789 qpair failed and we were unable to recover it. 00:26:45.789 [2024-07-15 18:52:02.280247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.789 [2024-07-15 18:52:02.280257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.789 qpair failed and we were unable to recover it. 
00:26:45.789 [2024-07-15 18:52:02.280431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.789 [2024-07-15 18:52:02.280442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.789 qpair failed and we were unable to recover it. 00:26:45.789 [2024-07-15 18:52:02.280536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.789 [2024-07-15 18:52:02.280546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.789 qpair failed and we were unable to recover it. 00:26:45.789 [2024-07-15 18:52:02.280728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.789 [2024-07-15 18:52:02.280739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.789 qpair failed and we were unable to recover it. 00:26:45.789 [2024-07-15 18:52:02.280905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.789 [2024-07-15 18:52:02.280914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.789 qpair failed and we were unable to recover it. 00:26:45.789 [2024-07-15 18:52:02.281025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.789 [2024-07-15 18:52:02.281035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.789 qpair failed and we were unable to recover it. 
00:26:45.789 [2024-07-15 18:52:02.281116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.789 [2024-07-15 18:52:02.281127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.789 qpair failed and we were unable to recover it. 00:26:45.789 [2024-07-15 18:52:02.281242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.789 [2024-07-15 18:52:02.281253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.789 qpair failed and we were unable to recover it. 00:26:45.789 [2024-07-15 18:52:02.281430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.789 [2024-07-15 18:52:02.281441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.789 qpair failed and we were unable to recover it. 00:26:45.789 [2024-07-15 18:52:02.281547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.789 [2024-07-15 18:52:02.281558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.789 qpair failed and we were unable to recover it. 00:26:45.789 [2024-07-15 18:52:02.281622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.789 [2024-07-15 18:52:02.281632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.789 qpair failed and we were unable to recover it. 
00:26:45.789 [2024-07-15 18:52:02.281740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.789 [2024-07-15 18:52:02.281750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.789 qpair failed and we were unable to recover it. 00:26:45.789 [2024-07-15 18:52:02.281867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.789 [2024-07-15 18:52:02.281877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.789 qpair failed and we were unable to recover it. 00:26:45.789 [2024-07-15 18:52:02.281975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.789 [2024-07-15 18:52:02.281985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.789 qpair failed and we were unable to recover it. 00:26:45.789 [2024-07-15 18:52:02.282141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.789 [2024-07-15 18:52:02.282151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.789 qpair failed and we were unable to recover it. 00:26:45.789 [2024-07-15 18:52:02.282251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.789 [2024-07-15 18:52:02.282262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.789 qpair failed and we were unable to recover it. 
00:26:45.790 [2024-07-15 18:52:02.285604] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:45.792 [2024-07-15 18:52:02.300772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-07-15 18:52:02.300782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-07-15 18:52:02.300951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-07-15 18:52:02.300961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-07-15 18:52:02.301172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-07-15 18:52:02.301182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-07-15 18:52:02.301331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-07-15 18:52:02.301342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-07-15 18:52:02.301447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-07-15 18:52:02.301457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 
00:26:45.792 [2024-07-15 18:52:02.301617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-07-15 18:52:02.301627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-07-15 18:52:02.301800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-07-15 18:52:02.301811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-07-15 18:52:02.301990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-07-15 18:52:02.302001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-07-15 18:52:02.302113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-07-15 18:52:02.302123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 00:26:45.792 [2024-07-15 18:52:02.302234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.792 [2024-07-15 18:52:02.302245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.792 qpair failed and we were unable to recover it. 
00:26:45.793 [2024-07-15 18:52:02.302361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.302372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.302544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.302555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.302743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.302753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.302865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.302877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.303012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.303022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 
00:26:45.793 [2024-07-15 18:52:02.303136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.303148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.303329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.303340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.303532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.303543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.303705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.303716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.303834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.303845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 
00:26:45.793 [2024-07-15 18:52:02.303931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.303942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.304058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.304068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.304231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.304243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.304409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.304423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.304528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.304539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 
00:26:45.793 [2024-07-15 18:52:02.304718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.304729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.304852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.304862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.305026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.305036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.305206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.305217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.305316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.305328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 
00:26:45.793 [2024-07-15 18:52:02.305561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.305572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.305686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.305697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.305779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.305790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.305973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.305984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.306211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.306222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 
00:26:45.793 [2024-07-15 18:52:02.306341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.306352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.306466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.306476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.306650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.306661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.306729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.306740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.306944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.306956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 
00:26:45.793 [2024-07-15 18:52:02.307129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.307140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.307343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.307355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.307538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.307549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.307729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.307740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.307910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.307920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 
00:26:45.793 [2024-07-15 18:52:02.308008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.308018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.308177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.308187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.308310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.308320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.308437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.308447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.793 qpair failed and we were unable to recover it. 00:26:45.793 [2024-07-15 18:52:02.308540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.793 [2024-07-15 18:52:02.308551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 
00:26:45.794 [2024-07-15 18:52:02.308726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.308738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 00:26:45.794 [2024-07-15 18:52:02.308916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.308927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 00:26:45.794 [2024-07-15 18:52:02.309050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.309061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 00:26:45.794 [2024-07-15 18:52:02.309150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.309160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 00:26:45.794 [2024-07-15 18:52:02.309366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.309377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 
00:26:45.794 [2024-07-15 18:52:02.309487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.309498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 00:26:45.794 [2024-07-15 18:52:02.309605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.309616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 00:26:45.794 [2024-07-15 18:52:02.309780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.309791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 00:26:45.794 [2024-07-15 18:52:02.309889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.309901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 00:26:45.794 [2024-07-15 18:52:02.310045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.310056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 
00:26:45.794 [2024-07-15 18:52:02.310158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.310168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 00:26:45.794 [2024-07-15 18:52:02.310260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.310272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 00:26:45.794 [2024-07-15 18:52:02.310437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.310447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 00:26:45.794 [2024-07-15 18:52:02.310609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.310622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 00:26:45.794 [2024-07-15 18:52:02.310737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.310748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 
00:26:45.794 [2024-07-15 18:52:02.310851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.310861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 00:26:45.794 [2024-07-15 18:52:02.311020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.311029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 00:26:45.794 [2024-07-15 18:52:02.311142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.311152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 00:26:45.794 [2024-07-15 18:52:02.311337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.311347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 00:26:45.794 [2024-07-15 18:52:02.311527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.311538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 
00:26:45.794 [2024-07-15 18:52:02.311653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.311665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 00:26:45.794 [2024-07-15 18:52:02.311827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.311836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 00:26:45.794 [2024-07-15 18:52:02.312018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.312028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 00:26:45.794 [2024-07-15 18:52:02.312122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.312132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 00:26:45.794 [2024-07-15 18:52:02.312404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.312415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 
00:26:45.794 [2024-07-15 18:52:02.312590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.312600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 00:26:45.794 [2024-07-15 18:52:02.312712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.312722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 00:26:45.794 [2024-07-15 18:52:02.312832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.312843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 00:26:45.794 [2024-07-15 18:52:02.313005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.313015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 00:26:45.794 [2024-07-15 18:52:02.313128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.794 [2024-07-15 18:52:02.313138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.794 qpair failed and we were unable to recover it. 
00:26:45.795 [2024-07-15 18:52:02.314864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-07-15 18:52:02.314902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.795 qpair failed and we were unable to recover it.
00:26:45.795 [2024-07-15 18:52:02.315118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-07-15 18:52:02.315154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.795 qpair failed and we were unable to recover it.
00:26:45.795 [2024-07-15 18:52:02.315395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-07-15 18:52:02.315429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.795 qpair failed and we were unable to recover it.
00:26:45.795 [2024-07-15 18:52:02.315572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-07-15 18:52:02.315584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.795 qpair failed and we were unable to recover it.
00:26:45.795 [2024-07-15 18:52:02.315782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.795 [2024-07-15 18:52:02.315793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.795 qpair failed and we were unable to recover it.
00:26:45.797 [2024-07-15 18:52:02.332617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-07-15 18:52:02.332626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 00:26:45.797 [2024-07-15 18:52:02.332857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-07-15 18:52:02.332867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 00:26:45.797 [2024-07-15 18:52:02.333011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-07-15 18:52:02.333021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 00:26:45.797 [2024-07-15 18:52:02.333208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-07-15 18:52:02.333218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 00:26:45.797 [2024-07-15 18:52:02.333404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-07-15 18:52:02.333414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 
00:26:45.797 [2024-07-15 18:52:02.333584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-07-15 18:52:02.333594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 00:26:45.797 [2024-07-15 18:52:02.333835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.797 [2024-07-15 18:52:02.333845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.797 qpair failed and we were unable to recover it. 00:26:45.797 [2024-07-15 18:52:02.333964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.333973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-07-15 18:52:02.334098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.334108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-07-15 18:52:02.334249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.334260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 
00:26:45.798 [2024-07-15 18:52:02.334436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.334446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-07-15 18:52:02.334648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.334658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-07-15 18:52:02.334816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.334826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-07-15 18:52:02.334929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.334939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-07-15 18:52:02.335113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.335123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 
00:26:45.798 [2024-07-15 18:52:02.335236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.335246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-07-15 18:52:02.335417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.335427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-07-15 18:52:02.335616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.335626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-07-15 18:52:02.335803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.335813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-07-15 18:52:02.335915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.335925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 
00:26:45.798 [2024-07-15 18:52:02.336197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.336208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-07-15 18:52:02.336319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.336329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-07-15 18:52:02.336498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.336508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-07-15 18:52:02.336691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.336701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-07-15 18:52:02.336934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.336944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 
00:26:45.798 [2024-07-15 18:52:02.337012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.337022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-07-15 18:52:02.337253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.337264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-07-15 18:52:02.337458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.337468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-07-15 18:52:02.337574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.337584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-07-15 18:52:02.337764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.337774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 
00:26:45.798 [2024-07-15 18:52:02.337934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.337944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-07-15 18:52:02.338051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.338061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-07-15 18:52:02.338242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.338252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-07-15 18:52:02.338382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.338392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-07-15 18:52:02.338504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.338514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 
00:26:45.798 [2024-07-15 18:52:02.338668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.338678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-07-15 18:52:02.338872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.338882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-07-15 18:52:02.338990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.339000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.798 [2024-07-15 18:52:02.339182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.798 [2024-07-15 18:52:02.339192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.798 qpair failed and we were unable to recover it. 00:26:45.799 [2024-07-15 18:52:02.339306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.799 [2024-07-15 18:52:02.339317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.799 qpair failed and we were unable to recover it. 
00:26:45.799 [2024-07-15 18:52:02.339491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.799 [2024-07-15 18:52:02.339502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.799 qpair failed and we were unable to recover it. 00:26:45.799 [2024-07-15 18:52:02.339650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.799 [2024-07-15 18:52:02.339660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.799 qpair failed and we were unable to recover it. 00:26:45.799 [2024-07-15 18:52:02.339791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.799 [2024-07-15 18:52:02.339801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.799 qpair failed and we were unable to recover it. 00:26:45.799 [2024-07-15 18:52:02.339913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.799 [2024-07-15 18:52:02.339923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.799 qpair failed and we were unable to recover it. 00:26:45.799 [2024-07-15 18:52:02.340037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.799 [2024-07-15 18:52:02.340047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.799 qpair failed and we were unable to recover it. 
00:26:45.799 [2024-07-15 18:52:02.340216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.799 [2024-07-15 18:52:02.340229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.799 qpair failed and we were unable to recover it. 00:26:45.799 [2024-07-15 18:52:02.340398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.799 [2024-07-15 18:52:02.340408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.799 qpair failed and we were unable to recover it. 00:26:45.799 [2024-07-15 18:52:02.340591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.799 [2024-07-15 18:52:02.340601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.799 qpair failed and we were unable to recover it. 00:26:45.799 [2024-07-15 18:52:02.340725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.799 [2024-07-15 18:52:02.340735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.799 qpair failed and we were unable to recover it. 00:26:45.799 [2024-07-15 18:52:02.340932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.799 [2024-07-15 18:52:02.340942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.799 qpair failed and we were unable to recover it. 
00:26:45.799 [2024-07-15 18:52:02.341120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.799 [2024-07-15 18:52:02.341130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.799 qpair failed and we were unable to recover it. 00:26:45.799 [2024-07-15 18:52:02.341222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.799 [2024-07-15 18:52:02.341236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.799 qpair failed and we were unable to recover it. 00:26:45.799 [2024-07-15 18:52:02.341351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.799 [2024-07-15 18:52:02.341360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.799 qpair failed and we were unable to recover it. 00:26:45.799 [2024-07-15 18:52:02.341522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.799 [2024-07-15 18:52:02.341532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.799 qpair failed and we were unable to recover it. 00:26:45.799 [2024-07-15 18:52:02.341600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.799 [2024-07-15 18:52:02.341610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.799 qpair failed and we were unable to recover it. 
00:26:45.799 [2024-07-15 18:52:02.341732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.799 [2024-07-15 18:52:02.341742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.799 qpair failed and we were unable to recover it. 00:26:45.799 [2024-07-15 18:52:02.341921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.799 [2024-07-15 18:52:02.341932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.799 qpair failed and we were unable to recover it. 00:26:45.799 [2024-07-15 18:52:02.341994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.799 [2024-07-15 18:52:02.342004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.799 qpair failed and we were unable to recover it. 00:26:45.799 [2024-07-15 18:52:02.342113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.800 [2024-07-15 18:52:02.342123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.800 qpair failed and we were unable to recover it. 00:26:45.800 [2024-07-15 18:52:02.342286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.800 [2024-07-15 18:52:02.342296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.800 qpair failed and we were unable to recover it. 
00:26:45.800 [2024-07-15 18:52:02.342395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.800 [2024-07-15 18:52:02.342408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.800 qpair failed and we were unable to recover it. 00:26:45.800 [2024-07-15 18:52:02.342589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.800 [2024-07-15 18:52:02.342599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.800 qpair failed and we were unable to recover it. 00:26:45.800 [2024-07-15 18:52:02.342686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.800 [2024-07-15 18:52:02.342698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.800 qpair failed and we were unable to recover it. 00:26:45.800 [2024-07-15 18:52:02.342801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.800 [2024-07-15 18:52:02.342812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.800 qpair failed and we were unable to recover it. 00:26:45.800 [2024-07-15 18:52:02.342917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.800 [2024-07-15 18:52:02.342927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.800 qpair failed and we were unable to recover it. 
00:26:45.800 [2024-07-15 18:52:02.343007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.800 [2024-07-15 18:52:02.343019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.800 qpair failed and we were unable to recover it. 00:26:45.800 [2024-07-15 18:52:02.343147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.800 [2024-07-15 18:52:02.343157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.800 qpair failed and we were unable to recover it. 00:26:45.800 [2024-07-15 18:52:02.343332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.800 [2024-07-15 18:52:02.343342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.800 qpair failed and we were unable to recover it. 00:26:45.800 [2024-07-15 18:52:02.343458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.800 [2024-07-15 18:52:02.343468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.800 qpair failed and we were unable to recover it. 00:26:45.800 [2024-07-15 18:52:02.343580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.800 [2024-07-15 18:52:02.343590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.800 qpair failed and we were unable to recover it. 
00:26:45.800 [2024-07-15 18:52:02.343699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.800 [2024-07-15 18:52:02.343708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.800 qpair failed and we were unable to recover it. 00:26:45.800 [2024-07-15 18:52:02.343889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.800 [2024-07-15 18:52:02.343899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.800 qpair failed and we were unable to recover it. 00:26:45.800 [2024-07-15 18:52:02.344014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.800 [2024-07-15 18:52:02.344024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.800 qpair failed and we were unable to recover it. 00:26:45.800 [2024-07-15 18:52:02.344120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.800 [2024-07-15 18:52:02.344130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.800 qpair failed and we were unable to recover it. 00:26:45.800 [2024-07-15 18:52:02.344308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.800 [2024-07-15 18:52:02.344318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.800 qpair failed and we were unable to recover it. 
00:26:45.800 [2024-07-15 18:52:02.344502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.800 [2024-07-15 18:52:02.344511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.800 qpair failed and we were unable to recover it. 00:26:45.800 [2024-07-15 18:52:02.344690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.800 [2024-07-15 18:52:02.344700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.800 qpair failed and we were unable to recover it. 00:26:45.800 [2024-07-15 18:52:02.344878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.800 [2024-07-15 18:52:02.344888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.800 qpair failed and we were unable to recover it. 00:26:45.800 [2024-07-15 18:52:02.345006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.800 [2024-07-15 18:52:02.345016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.800 qpair failed and we were unable to recover it. 00:26:45.800 [2024-07-15 18:52:02.345198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.800 [2024-07-15 18:52:02.345208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.800 qpair failed and we were unable to recover it. 
00:26:45.803 [2024-07-15 18:52:02.361289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.803 [2024-07-15 18:52:02.361300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.803 qpair failed and we were unable to recover it. 00:26:45.803 [2024-07-15 18:52:02.361411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.803 [2024-07-15 18:52:02.361421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.803 qpair failed and we were unable to recover it. 00:26:45.803 [2024-07-15 18:52:02.361576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.803 [2024-07-15 18:52:02.361569] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:45.803 [2024-07-15 18:52:02.361587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.803 qpair failed and we were unable to recover it. 00:26:45.803 [2024-07-15 18:52:02.361600] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:45.803 [2024-07-15 18:52:02.361607] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:45.803 [2024-07-15 18:52:02.361613] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:45.803 [2024-07-15 18:52:02.361619] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:45.803 [2024-07-15 18:52:02.361727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:26:45.803 [2024-07-15 18:52:02.361863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.803 [2024-07-15 18:52:02.361873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.803 qpair failed and we were unable to recover it.
00:26:45.803 [2024-07-15 18:52:02.361832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:26:45.803 [2024-07-15 18:52:02.361959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:26:45.803 [2024-07-15 18:52:02.361960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:26:45.803 [2024-07-15 18:52:02.362058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.803 [2024-07-15 18:52:02.362068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.803 qpair failed and we were unable to recover it.
00:26:45.803 [2024-07-15 18:52:02.362194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.803 [2024-07-15 18:52:02.362204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.803 qpair failed and we were unable to recover it.
00:26:45.803 [2024-07-15 18:52:02.362374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.803 [2024-07-15 18:52:02.362385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.803 qpair failed and we were unable to recover it.
00:26:45.804 [2024-07-15 18:52:02.368483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.804 [2024-07-15 18:52:02.368507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.804 qpair failed and we were unable to recover it.
00:26:45.804 [2024-07-15 18:52:02.368788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.804 [2024-07-15 18:52:02.368815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.804 qpair failed and we were unable to recover it.
00:26:45.804 [2024-07-15 18:52:02.368958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.804 [2024-07-15 18:52:02.368983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.804 qpair failed and we were unable to recover it.
00:26:45.804 [2024-07-15 18:52:02.369184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.804 [2024-07-15 18:52:02.369195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.804 qpair failed and we were unable to recover it.
00:26:45.804 [2024-07-15 18:52:02.369387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.804 [2024-07-15 18:52:02.369397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.804 qpair failed and we were unable to recover it.
00:26:45.805 [2024-07-15 18:52:02.380111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.805 [2024-07-15 18:52:02.380121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.805 qpair failed and we were unable to recover it. 00:26:45.805 [2024-07-15 18:52:02.380301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.805 [2024-07-15 18:52:02.380312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.805 qpair failed and we were unable to recover it. 00:26:45.805 [2024-07-15 18:52:02.380484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.805 [2024-07-15 18:52:02.380494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.805 qpair failed and we were unable to recover it. 00:26:45.805 [2024-07-15 18:52:02.380618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.805 [2024-07-15 18:52:02.380629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.805 qpair failed and we were unable to recover it. 00:26:45.805 [2024-07-15 18:52:02.380830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.805 [2024-07-15 18:52:02.380841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.805 qpair failed and we were unable to recover it. 
00:26:45.805 [2024-07-15 18:52:02.381070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.805 [2024-07-15 18:52:02.381081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.805 qpair failed and we were unable to recover it. 00:26:45.805 [2024-07-15 18:52:02.381246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.805 [2024-07-15 18:52:02.381258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.805 qpair failed and we were unable to recover it. 00:26:45.805 [2024-07-15 18:52:02.381393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.805 [2024-07-15 18:52:02.381404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.805 qpair failed and we were unable to recover it. 00:26:45.805 [2024-07-15 18:52:02.381580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.805 [2024-07-15 18:52:02.381591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.805 qpair failed and we were unable to recover it. 00:26:45.805 [2024-07-15 18:52:02.381862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.805 [2024-07-15 18:52:02.381875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.805 qpair failed and we were unable to recover it. 
00:26:45.805 [2024-07-15 18:52:02.381983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.805 [2024-07-15 18:52:02.381993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.805 qpair failed and we were unable to recover it. 00:26:45.805 [2024-07-15 18:52:02.382238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.382248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.382371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.382381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.382495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.382506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.382732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.382743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 
00:26:45.806 [2024-07-15 18:52:02.382901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.382914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.383093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.383103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.383299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.383310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.383449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.383459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.383579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.383589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 
00:26:45.806 [2024-07-15 18:52:02.383763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.383773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.383883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.383893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.384003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.384015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.384123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.384133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.384306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.384318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 
00:26:45.806 [2024-07-15 18:52:02.384414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.384426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.384531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.384541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.384713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.384723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.384892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.384903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.385027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.385039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 
00:26:45.806 [2024-07-15 18:52:02.385140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.385152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.385379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.385391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.385645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.385657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.385828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.385839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.386069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.386082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 
00:26:45.806 [2024-07-15 18:52:02.386263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.386274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.386454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.386466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.386569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.386580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.386752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.386764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.386926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.386937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 
00:26:45.806 [2024-07-15 18:52:02.387015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.387025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.387186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.387198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.387382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.387393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.387505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.387516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.387678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.387690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 
00:26:45.806 [2024-07-15 18:52:02.387864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.387876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.387988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.387999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.388174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.388186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.388298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.388309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.388485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.388497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 
00:26:45.806 [2024-07-15 18:52:02.388680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.388691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.388870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.388882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.389005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.389015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.389129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.389139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.389263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.389274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 
00:26:45.806 [2024-07-15 18:52:02.389371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.389383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.389496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.389507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.389615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.389626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.389749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.389759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.806 qpair failed and we were unable to recover it. 00:26:45.806 [2024-07-15 18:52:02.389936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.806 [2024-07-15 18:52:02.389947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.807 qpair failed and we were unable to recover it. 
00:26:45.807 [2024-07-15 18:52:02.390140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.807 [2024-07-15 18:52:02.390151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.807 qpair failed and we were unable to recover it. 00:26:45.807 [2024-07-15 18:52:02.390346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.807 [2024-07-15 18:52:02.390358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.807 qpair failed and we were unable to recover it. 00:26:45.807 [2024-07-15 18:52:02.390472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.807 [2024-07-15 18:52:02.390482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.807 qpair failed and we were unable to recover it. 00:26:45.807 [2024-07-15 18:52:02.390653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.807 [2024-07-15 18:52:02.390664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.807 qpair failed and we were unable to recover it. 00:26:45.807 [2024-07-15 18:52:02.390770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.807 [2024-07-15 18:52:02.390781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.807 qpair failed and we were unable to recover it. 
00:26:45.807 [2024-07-15 18:52:02.390944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.807 [2024-07-15 18:52:02.390954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.807 qpair failed and we were unable to recover it. 00:26:45.807 [2024-07-15 18:52:02.391072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.807 [2024-07-15 18:52:02.391083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.807 qpair failed and we were unable to recover it. 00:26:45.807 [2024-07-15 18:52:02.391244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.807 [2024-07-15 18:52:02.391256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.807 qpair failed and we were unable to recover it. 00:26:45.807 [2024-07-15 18:52:02.391438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.807 [2024-07-15 18:52:02.391448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.807 qpair failed and we were unable to recover it. 00:26:45.807 [2024-07-15 18:52:02.391624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.807 [2024-07-15 18:52:02.391635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.807 qpair failed and we were unable to recover it. 
00:26:45.807 [2024-07-15 18:52:02.391739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.807 [2024-07-15 18:52:02.391750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.807 qpair failed and we were unable to recover it. 00:26:45.807 [2024-07-15 18:52:02.391926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.807 [2024-07-15 18:52:02.391936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.807 qpair failed and we were unable to recover it. 00:26:45.807 [2024-07-15 18:52:02.392046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.807 [2024-07-15 18:52:02.392057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.807 qpair failed and we were unable to recover it. 00:26:45.807 [2024-07-15 18:52:02.392249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.807 [2024-07-15 18:52:02.392261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.807 qpair failed and we were unable to recover it. 00:26:45.807 [2024-07-15 18:52:02.392370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.807 [2024-07-15 18:52:02.392380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.807 qpair failed and we were unable to recover it. 
00:26:45.807 [2024-07-15 18:52:02.392551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.807 [2024-07-15 18:52:02.392562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.807 qpair failed and we were unable to recover it. 00:26:45.807 [2024-07-15 18:52:02.392732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.807 [2024-07-15 18:52:02.392743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.807 qpair failed and we were unable to recover it. 00:26:45.807 [2024-07-15 18:52:02.392942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.807 [2024-07-15 18:52:02.392953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.807 qpair failed and we were unable to recover it. 00:26:45.807 [2024-07-15 18:52:02.393126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.807 [2024-07-15 18:52:02.393138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.807 qpair failed and we were unable to recover it. 00:26:45.807 [2024-07-15 18:52:02.393314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.807 [2024-07-15 18:52:02.393326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.807 qpair failed and we were unable to recover it. 
[... the same three-line error sequence — posix.c:1038:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every subsequent reconnect attempt, timestamps 2024-07-15 18:52:02.393422 through 18:52:02.411349 (console time 00:26:45.807-00:26:45.810) ...]
00:26:45.810 [2024-07-15 18:52:02.411540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.810 [2024-07-15 18:52:02.411551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.810 qpair failed and we were unable to recover it. 00:26:45.810 [2024-07-15 18:52:02.411670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.810 [2024-07-15 18:52:02.411681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.810 qpair failed and we were unable to recover it. 00:26:45.810 [2024-07-15 18:52:02.411843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.810 [2024-07-15 18:52:02.411853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.810 qpair failed and we were unable to recover it. 00:26:45.810 [2024-07-15 18:52:02.411974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.810 [2024-07-15 18:52:02.411984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.810 qpair failed and we were unable to recover it. 00:26:45.810 [2024-07-15 18:52:02.412078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.810 [2024-07-15 18:52:02.412088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.810 qpair failed and we were unable to recover it. 
00:26:45.810 [2024-07-15 18:52:02.412252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.810 [2024-07-15 18:52:02.412262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.810 qpair failed and we were unable to recover it. 00:26:45.810 [2024-07-15 18:52:02.412446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.810 [2024-07-15 18:52:02.412457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.810 qpair failed and we were unable to recover it. 00:26:45.810 [2024-07-15 18:52:02.412571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.810 [2024-07-15 18:52:02.412581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.810 qpair failed and we were unable to recover it. 00:26:45.810 [2024-07-15 18:52:02.412764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.810 [2024-07-15 18:52:02.412775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.810 qpair failed and we were unable to recover it. 00:26:45.810 [2024-07-15 18:52:02.412884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.810 [2024-07-15 18:52:02.412894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.810 qpair failed and we were unable to recover it. 
00:26:45.810 [2024-07-15 18:52:02.413067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.810 [2024-07-15 18:52:02.413078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.810 qpair failed and we were unable to recover it. 00:26:45.810 [2024-07-15 18:52:02.413174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.810 [2024-07-15 18:52:02.413185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:45.810 qpair failed and we were unable to recover it. 00:26:45.810 [2024-07-15 18:52:02.413407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.810 [2024-07-15 18:52:02.413441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.810 qpair failed and we were unable to recover it. 00:26:45.810 [2024-07-15 18:52:02.413686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.810 [2024-07-15 18:52:02.413700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.810 qpair failed and we were unable to recover it. 00:26:45.810 [2024-07-15 18:52:02.413917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.810 [2024-07-15 18:52:02.413930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.810 qpair failed and we were unable to recover it. 
00:26:45.810 [2024-07-15 18:52:02.414058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.810 [2024-07-15 18:52:02.414071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.810 qpair failed and we were unable to recover it. 00:26:45.810 [2024-07-15 18:52:02.414252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.810 [2024-07-15 18:52:02.414268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.810 qpair failed and we were unable to recover it. 00:26:45.810 [2024-07-15 18:52:02.414552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.810 [2024-07-15 18:52:02.414567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.810 qpair failed and we were unable to recover it. 00:26:45.810 [2024-07-15 18:52:02.414776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.810 [2024-07-15 18:52:02.414790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.810 qpair failed and we were unable to recover it. 00:26:45.810 [2024-07-15 18:52:02.414989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.810 [2024-07-15 18:52:02.415003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.810 qpair failed and we were unable to recover it. 
00:26:45.810 [2024-07-15 18:52:02.415193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.810 [2024-07-15 18:52:02.415207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.810 qpair failed and we were unable to recover it. 00:26:45.810 [2024-07-15 18:52:02.415459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.810 [2024-07-15 18:52:02.415473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.810 qpair failed and we were unable to recover it. 00:26:45.810 [2024-07-15 18:52:02.415681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.810 [2024-07-15 18:52:02.415695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.810 qpair failed and we were unable to recover it. 00:26:45.810 [2024-07-15 18:52:02.415934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.415947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.416075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.416089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 
00:26:45.811 [2024-07-15 18:52:02.416261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.416276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.416549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.416563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.416748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.416762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.416930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.416944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.417156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.417170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 
00:26:45.811 [2024-07-15 18:52:02.417348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.417363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.417494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.417508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.417700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.417714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.417851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.417865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.418106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.418119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 
00:26:45.811 [2024-07-15 18:52:02.418237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.418252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.418464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.418477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.418709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.418723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.418961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.418975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.419116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.419136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 
00:26:45.811 [2024-07-15 18:52:02.419319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.419333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.419511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.419526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.419629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.419643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.419867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.419881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.420064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.420078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 
00:26:45.811 [2024-07-15 18:52:02.420192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.420206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.420414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.420428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.420594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.420607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.420732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.420746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.420859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.420873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 
00:26:45.811 [2024-07-15 18:52:02.421055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.421069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.421241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.421255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.421425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.421439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.421689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.421703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.421777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.421791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 
00:26:45.811 [2024-07-15 18:52:02.422025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.422038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.422154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.422167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.422297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.422312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.422548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.422562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.422732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.422745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 
00:26:45.811 [2024-07-15 18:52:02.422932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.422946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.423117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.423132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.423312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.811 [2024-07-15 18:52:02.423326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.811 qpair failed and we were unable to recover it. 00:26:45.811 [2024-07-15 18:52:02.423522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.812 [2024-07-15 18:52:02.423535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.812 qpair failed and we were unable to recover it. 00:26:45.812 [2024-07-15 18:52:02.423730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.812 [2024-07-15 18:52:02.423744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.812 qpair failed and we were unable to recover it. 
00:26:45.812 [2024-07-15 18:52:02.423984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.812 [2024-07-15 18:52:02.423998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.812 qpair failed and we were unable to recover it. 00:26:45.812 [2024-07-15 18:52:02.424101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.812 [2024-07-15 18:52:02.424118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.812 qpair failed and we were unable to recover it. 00:26:45.812 [2024-07-15 18:52:02.424295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.812 [2024-07-15 18:52:02.424309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.812 qpair failed and we were unable to recover it. 00:26:45.812 [2024-07-15 18:52:02.424484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.812 [2024-07-15 18:52:02.424497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.812 qpair failed and we were unable to recover it. 00:26:45.812 [2024-07-15 18:52:02.424733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.812 [2024-07-15 18:52:02.424747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.812 qpair failed and we were unable to recover it. 
00:26:45.812 [2024-07-15 18:52:02.425003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.812 [2024-07-15 18:52:02.425017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.812 qpair failed and we were unable to recover it. 00:26:45.812 [2024-07-15 18:52:02.425188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.812 [2024-07-15 18:52:02.425202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.812 qpair failed and we were unable to recover it. 00:26:45.812 [2024-07-15 18:52:02.425412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.812 [2024-07-15 18:52:02.425426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.812 qpair failed and we were unable to recover it. 00:26:45.812 [2024-07-15 18:52:02.425616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.812 [2024-07-15 18:52:02.425630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.812 qpair failed and we were unable to recover it. 00:26:45.812 [2024-07-15 18:52:02.425814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.812 [2024-07-15 18:52:02.425828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:45.812 qpair failed and we were unable to recover it. 
00:26:45.812 [2024-07-15 18:52:02.425993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.812 [2024-07-15 18:52:02.426007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:45.812 qpair failed and we were unable to recover it.
00:26:45.812 [2024-07-15 18:52:02.426970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.812 [2024-07-15 18:52:02.426994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:45.812 qpair failed and we were unable to recover it.
00:26:45.812 [2024-07-15 18:52:02.427177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.812 [2024-07-15 18:52:02.427190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:45.812 qpair failed and we were unable to recover it.
00:26:45.814 [2024-07-15 18:52:02.441068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.814 [2024-07-15 18:52:02.441090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420
00:26:45.814 qpair failed and we were unable to recover it.
[~110 further repetitions of the same "connect() failed, errno = 111" / "qpair failed and we were unable to recover it." sequence, spanning 18:52:02.426 through 18:52:02.446, against tqpairs 0xebded0, 0x7ff7c0000b90 and 0x7ff7b8000b90, elided for brevity]
00:26:46.102 [2024-07-15 18:52:02.446532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.102 [2024-07-15 18:52:02.446541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.102 qpair failed and we were unable to recover it. 00:26:46.102 [2024-07-15 18:52:02.446771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.102 [2024-07-15 18:52:02.446781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.102 qpair failed and we were unable to recover it. 00:26:46.102 [2024-07-15 18:52:02.446900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.102 [2024-07-15 18:52:02.446910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.102 qpair failed and we were unable to recover it. 00:26:46.102 [2024-07-15 18:52:02.447013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.102 [2024-07-15 18:52:02.447023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.102 qpair failed and we were unable to recover it. 00:26:46.102 [2024-07-15 18:52:02.447129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.102 [2024-07-15 18:52:02.447138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.102 qpair failed and we were unable to recover it. 
00:26:46.102 [2024-07-15 18:52:02.447239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.102 [2024-07-15 18:52:02.447249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.102 qpair failed and we were unable to recover it. 00:26:46.102 [2024-07-15 18:52:02.447399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.102 [2024-07-15 18:52:02.447408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.102 qpair failed and we were unable to recover it. 00:26:46.102 [2024-07-15 18:52:02.447503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.102 [2024-07-15 18:52:02.447513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.102 qpair failed and we were unable to recover it. 00:26:46.102 [2024-07-15 18:52:02.447735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.102 [2024-07-15 18:52:02.447745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.102 qpair failed and we were unable to recover it. 00:26:46.102 [2024-07-15 18:52:02.447810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.102 [2024-07-15 18:52:02.447820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.102 qpair failed and we were unable to recover it. 
00:26:46.102 [2024-07-15 18:52:02.448023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.102 [2024-07-15 18:52:02.448033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.102 qpair failed and we were unable to recover it. 00:26:46.102 [2024-07-15 18:52:02.448209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.102 [2024-07-15 18:52:02.448219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.102 qpair failed and we were unable to recover it. 00:26:46.102 [2024-07-15 18:52:02.448356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.102 [2024-07-15 18:52:02.448367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.102 qpair failed and we were unable to recover it. 00:26:46.102 [2024-07-15 18:52:02.448475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.102 [2024-07-15 18:52:02.448485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.102 qpair failed and we were unable to recover it. 00:26:46.102 [2024-07-15 18:52:02.448603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.102 [2024-07-15 18:52:02.448613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.102 qpair failed and we were unable to recover it. 
00:26:46.102 [2024-07-15 18:52:02.448793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.102 [2024-07-15 18:52:02.448803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.102 qpair failed and we were unable to recover it. 00:26:46.102 [2024-07-15 18:52:02.448969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.102 [2024-07-15 18:52:02.448979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.102 qpair failed and we were unable to recover it. 00:26:46.102 [2024-07-15 18:52:02.449209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.102 [2024-07-15 18:52:02.449218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.102 qpair failed and we were unable to recover it. 00:26:46.102 [2024-07-15 18:52:02.449338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.102 [2024-07-15 18:52:02.449348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.102 qpair failed and we were unable to recover it. 00:26:46.102 [2024-07-15 18:52:02.449514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.102 [2024-07-15 18:52:02.449523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.102 qpair failed and we were unable to recover it. 
00:26:46.102 [2024-07-15 18:52:02.449626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.102 [2024-07-15 18:52:02.449635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.102 qpair failed and we were unable to recover it. 00:26:46.102 [2024-07-15 18:52:02.449816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.102 [2024-07-15 18:52:02.449826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.102 qpair failed and we were unable to recover it. 00:26:46.102 [2024-07-15 18:52:02.449936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.102 [2024-07-15 18:52:02.449946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.102 qpair failed and we were unable to recover it. 00:26:46.102 [2024-07-15 18:52:02.450197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.102 [2024-07-15 18:52:02.450207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.102 qpair failed and we were unable to recover it. 00:26:46.102 [2024-07-15 18:52:02.450339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.102 [2024-07-15 18:52:02.450349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.102 qpair failed and we were unable to recover it. 
00:26:46.102 [2024-07-15 18:52:02.450509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.102 [2024-07-15 18:52:02.450518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.102 qpair failed and we were unable to recover it. 00:26:46.102 [2024-07-15 18:52:02.450666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.102 [2024-07-15 18:52:02.450676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.102 qpair failed and we were unable to recover it. 00:26:46.102 [2024-07-15 18:52:02.450888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.102 [2024-07-15 18:52:02.450898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.102 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.451007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.451017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.451245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.451255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 
00:26:46.103 [2024-07-15 18:52:02.451507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.451517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.451617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.451627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.451733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.451743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.451913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.451922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.452032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.452041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 
00:26:46.103 [2024-07-15 18:52:02.452166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.452176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.452280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.452290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.452466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.452478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.452703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.452713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.452871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.452880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 
00:26:46.103 [2024-07-15 18:52:02.453053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.453062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.453178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.453188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.453313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.453324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.453506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.453516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.453676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.453685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 
00:26:46.103 [2024-07-15 18:52:02.453844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.453854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.453976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.453986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.454106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.454115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.454201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.454211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.454322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.454332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 
00:26:46.103 [2024-07-15 18:52:02.454456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.454466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.454630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.454640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.454798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.454808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.454989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.454999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.455192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.455202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 
00:26:46.103 [2024-07-15 18:52:02.455450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.455460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.455578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.455588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.455750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.455760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.455866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.455876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.456106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.456116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 
00:26:46.103 [2024-07-15 18:52:02.456217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.456230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.456427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.456436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.456600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.456610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.456864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.456874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.456990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.457000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 
00:26:46.103 [2024-07-15 18:52:02.457232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.103 [2024-07-15 18:52:02.457242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.103 qpair failed and we were unable to recover it. 00:26:46.103 [2024-07-15 18:52:02.457349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.104 [2024-07-15 18:52:02.457359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.104 qpair failed and we were unable to recover it. 00:26:46.104 [2024-07-15 18:52:02.457531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.104 [2024-07-15 18:52:02.457541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.104 qpair failed and we were unable to recover it. 00:26:46.104 [2024-07-15 18:52:02.457719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.104 [2024-07-15 18:52:02.457729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.104 qpair failed and we were unable to recover it. 00:26:46.104 [2024-07-15 18:52:02.457923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.104 [2024-07-15 18:52:02.457933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.104 qpair failed and we were unable to recover it. 
00:26:46.104 [2024-07-15 18:52:02.458042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.104 [2024-07-15 18:52:02.458052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.104 qpair failed and we were unable to recover it. 00:26:46.104 [2024-07-15 18:52:02.458242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.104 [2024-07-15 18:52:02.458252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.104 qpair failed and we were unable to recover it. 00:26:46.104 [2024-07-15 18:52:02.458359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.104 [2024-07-15 18:52:02.458369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.104 qpair failed and we were unable to recover it. 00:26:46.104 [2024-07-15 18:52:02.458567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.104 [2024-07-15 18:52:02.458577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.104 qpair failed and we were unable to recover it. 00:26:46.104 [2024-07-15 18:52:02.458826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.104 [2024-07-15 18:52:02.458835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.104 qpair failed and we were unable to recover it. 
00:26:46.104 [2024-07-15 18:52:02.458914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.104 [2024-07-15 18:52:02.458923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.104 qpair failed and we were unable to recover it. 
[... the same error triplet (posix.c:1038 connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2383 sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every retry from 18:52:02.459117 through 18:52:02.478206 ...]
00:26:46.107 [2024-07-15 18:52:02.478464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.107 [2024-07-15 18:52:02.478473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.107 qpair failed and we were unable to recover it. 
00:26:46.107 [2024-07-15 18:52:02.478568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.107 [2024-07-15 18:52:02.478578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.107 qpair failed and we were unable to recover it. 00:26:46.107 [2024-07-15 18:52:02.478831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.107 [2024-07-15 18:52:02.478841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.107 qpair failed and we were unable to recover it. 00:26:46.107 [2024-07-15 18:52:02.478944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.107 [2024-07-15 18:52:02.478954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.107 qpair failed and we were unable to recover it. 00:26:46.107 [2024-07-15 18:52:02.479053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.107 [2024-07-15 18:52:02.479062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.107 qpair failed and we were unable to recover it. 00:26:46.107 [2024-07-15 18:52:02.479287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.107 [2024-07-15 18:52:02.479296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.107 qpair failed and we were unable to recover it. 
00:26:46.107 [2024-07-15 18:52:02.479474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.107 [2024-07-15 18:52:02.479483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.107 qpair failed and we were unable to recover it. 00:26:46.107 [2024-07-15 18:52:02.479667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.107 [2024-07-15 18:52:02.479678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.107 qpair failed and we were unable to recover it. 00:26:46.107 [2024-07-15 18:52:02.479779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.107 [2024-07-15 18:52:02.479791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.107 qpair failed and we were unable to recover it. 00:26:46.107 [2024-07-15 18:52:02.479890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.107 [2024-07-15 18:52:02.479900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.107 qpair failed and we were unable to recover it. 00:26:46.107 [2024-07-15 18:52:02.480085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.107 [2024-07-15 18:52:02.480095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.107 qpair failed and we were unable to recover it. 
00:26:46.107 [2024-07-15 18:52:02.480215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.107 [2024-07-15 18:52:02.480229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.107 qpair failed and we were unable to recover it. 00:26:46.107 [2024-07-15 18:52:02.480408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.107 [2024-07-15 18:52:02.480418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.107 qpair failed and we were unable to recover it. 00:26:46.107 [2024-07-15 18:52:02.480647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.107 [2024-07-15 18:52:02.480657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 00:26:46.108 [2024-07-15 18:52:02.480817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.480827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 00:26:46.108 [2024-07-15 18:52:02.480947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.480957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 
00:26:46.108 [2024-07-15 18:52:02.481210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.481220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 00:26:46.108 [2024-07-15 18:52:02.481337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.481347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 00:26:46.108 [2024-07-15 18:52:02.481514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.481525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 00:26:46.108 [2024-07-15 18:52:02.481634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.481644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 00:26:46.108 [2024-07-15 18:52:02.481830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.481840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 
00:26:46.108 [2024-07-15 18:52:02.482067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.482077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 00:26:46.108 [2024-07-15 18:52:02.482265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.482275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 00:26:46.108 [2024-07-15 18:52:02.482439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.482449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 00:26:46.108 [2024-07-15 18:52:02.482671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.482681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 00:26:46.108 [2024-07-15 18:52:02.482784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.482794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 
00:26:46.108 [2024-07-15 18:52:02.483041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.483051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 00:26:46.108 [2024-07-15 18:52:02.483149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.483158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 00:26:46.108 [2024-07-15 18:52:02.483324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.483335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 00:26:46.108 [2024-07-15 18:52:02.483506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.483516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 00:26:46.108 [2024-07-15 18:52:02.483634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.483644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 
00:26:46.108 [2024-07-15 18:52:02.483817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.483827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 00:26:46.108 [2024-07-15 18:52:02.484079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.484089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 00:26:46.108 [2024-07-15 18:52:02.484254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.484265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 00:26:46.108 [2024-07-15 18:52:02.484442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.484452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 00:26:46.108 [2024-07-15 18:52:02.484633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.484643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 
00:26:46.108 [2024-07-15 18:52:02.484757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.484767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 00:26:46.108 [2024-07-15 18:52:02.485026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.485036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 00:26:46.108 [2024-07-15 18:52:02.485293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.485303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 00:26:46.108 [2024-07-15 18:52:02.485405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.485415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 00:26:46.108 [2024-07-15 18:52:02.485521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.485531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 
00:26:46.108 [2024-07-15 18:52:02.485694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.485703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 00:26:46.108 [2024-07-15 18:52:02.485985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.485994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 00:26:46.108 [2024-07-15 18:52:02.486119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.486129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 00:26:46.108 [2024-07-15 18:52:02.486309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.486320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 00:26:46.108 [2024-07-15 18:52:02.486427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.486437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 
00:26:46.108 [2024-07-15 18:52:02.486541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.108 [2024-07-15 18:52:02.486551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.108 qpair failed and we were unable to recover it. 00:26:46.109 [2024-07-15 18:52:02.486655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.109 [2024-07-15 18:52:02.486664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.109 qpair failed and we were unable to recover it. 00:26:46.109 [2024-07-15 18:52:02.486768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.109 [2024-07-15 18:52:02.486780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.109 qpair failed and we were unable to recover it. 00:26:46.109 [2024-07-15 18:52:02.486939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.109 [2024-07-15 18:52:02.486949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.109 qpair failed and we were unable to recover it. 00:26:46.109 [2024-07-15 18:52:02.487129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.109 [2024-07-15 18:52:02.487139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.109 qpair failed and we were unable to recover it. 
00:26:46.109 [2024-07-15 18:52:02.487220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.109 [2024-07-15 18:52:02.487235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.109 qpair failed and we were unable to recover it. 00:26:46.109 [2024-07-15 18:52:02.487415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.109 [2024-07-15 18:52:02.487425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.109 qpair failed and we were unable to recover it. 00:26:46.109 [2024-07-15 18:52:02.487536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.109 [2024-07-15 18:52:02.487546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.109 qpair failed and we were unable to recover it. 00:26:46.109 [2024-07-15 18:52:02.487655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.109 [2024-07-15 18:52:02.487665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.109 qpair failed and we were unable to recover it. 00:26:46.109 [2024-07-15 18:52:02.487844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.109 [2024-07-15 18:52:02.487854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.109 qpair failed and we were unable to recover it. 
00:26:46.109 [2024-07-15 18:52:02.488045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.109 [2024-07-15 18:52:02.488055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.109 qpair failed and we were unable to recover it. 00:26:46.109 [2024-07-15 18:52:02.488230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.109 [2024-07-15 18:52:02.488240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.109 qpair failed and we were unable to recover it. 00:26:46.109 [2024-07-15 18:52:02.488347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.109 [2024-07-15 18:52:02.488357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.109 qpair failed and we were unable to recover it. 00:26:46.109 [2024-07-15 18:52:02.488584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.109 [2024-07-15 18:52:02.488594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.109 qpair failed and we were unable to recover it. 00:26:46.109 [2024-07-15 18:52:02.488702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.109 [2024-07-15 18:52:02.488712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.109 qpair failed and we were unable to recover it. 
00:26:46.109 [2024-07-15 18:52:02.488872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.109 [2024-07-15 18:52:02.488882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.109 qpair failed and we were unable to recover it. 00:26:46.109 [2024-07-15 18:52:02.489066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.109 [2024-07-15 18:52:02.489076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.109 qpair failed and we were unable to recover it. 00:26:46.109 [2024-07-15 18:52:02.489259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.109 [2024-07-15 18:52:02.489269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.109 qpair failed and we were unable to recover it. 00:26:46.109 [2024-07-15 18:52:02.489453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.109 [2024-07-15 18:52:02.489463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.109 qpair failed and we were unable to recover it. 00:26:46.109 [2024-07-15 18:52:02.489737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.109 [2024-07-15 18:52:02.489746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.109 qpair failed and we were unable to recover it. 
00:26:46.109 [2024-07-15 18:52:02.489908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.109 [2024-07-15 18:52:02.489918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.109 qpair failed and we were unable to recover it. 00:26:46.109 [2024-07-15 18:52:02.490093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.109 [2024-07-15 18:52:02.490103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.109 qpair failed and we were unable to recover it. 00:26:46.109 [2024-07-15 18:52:02.490276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.109 [2024-07-15 18:52:02.490287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.109 qpair failed and we were unable to recover it. 00:26:46.109 [2024-07-15 18:52:02.490491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.109 [2024-07-15 18:52:02.490501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.109 qpair failed and we were unable to recover it. 00:26:46.109 [2024-07-15 18:52:02.490631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.109 [2024-07-15 18:52:02.490641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.109 qpair failed and we were unable to recover it. 
00:26:46.109 [2024-07-15 18:52:02.490813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.109 [2024-07-15 18:52:02.490824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.109 qpair failed and we were unable to recover it. 00:26:46.109 [2024-07-15 18:52:02.491002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.109 [2024-07-15 18:52:02.491012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.109 qpair failed and we were unable to recover it. 00:26:46.109 [2024-07-15 18:52:02.491123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.109 [2024-07-15 18:52:02.491133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.109 qpair failed and we were unable to recover it. 00:26:46.109 [2024-07-15 18:52:02.491241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.109 [2024-07-15 18:52:02.491251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.109 qpair failed and we were unable to recover it. 00:26:46.109 [2024-07-15 18:52:02.491355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.109 [2024-07-15 18:52:02.491366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.109 qpair failed and we were unable to recover it. 
00:26:46.109 [2024-07-15 18:52:02.491593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.109 [2024-07-15 18:52:02.491602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.109 qpair failed and we were unable to recover it.
00:26:46.113 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triple repeats 114 more times for tqpair=0x7ff7c0000b90 (addr=10.0.0.2, port=4420) between 18:52:02.491769 and 18:52:02.511340 ...]
00:26:46.113 [2024-07-15 18:52:02.511428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.113 [2024-07-15 18:52:02.511439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.113 qpair failed and we were unable to recover it. 00:26:46.113 [2024-07-15 18:52:02.511610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.113 [2024-07-15 18:52:02.511620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.113 qpair failed and we were unable to recover it. 00:26:46.113 [2024-07-15 18:52:02.511742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.113 [2024-07-15 18:52:02.511752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.113 qpair failed and we were unable to recover it. 00:26:46.113 [2024-07-15 18:52:02.511976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.113 [2024-07-15 18:52:02.511986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.113 qpair failed and we were unable to recover it. 00:26:46.113 [2024-07-15 18:52:02.512093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.113 [2024-07-15 18:52:02.512103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.113 qpair failed and we were unable to recover it. 
00:26:46.113 [2024-07-15 18:52:02.512361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.113 [2024-07-15 18:52:02.512370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.113 qpair failed and we were unable to recover it. 00:26:46.113 [2024-07-15 18:52:02.512467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.113 [2024-07-15 18:52:02.512477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.113 qpair failed and we were unable to recover it. 00:26:46.113 [2024-07-15 18:52:02.512641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.113 [2024-07-15 18:52:02.512650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.113 qpair failed and we were unable to recover it. 00:26:46.113 [2024-07-15 18:52:02.512883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.113 [2024-07-15 18:52:02.512893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.113 qpair failed and we were unable to recover it. 00:26:46.113 [2024-07-15 18:52:02.513050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.113 [2024-07-15 18:52:02.513060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.113 qpair failed and we were unable to recover it. 
00:26:46.113 [2024-07-15 18:52:02.513241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.113 [2024-07-15 18:52:02.513257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.113 qpair failed and we were unable to recover it. 00:26:46.113 [2024-07-15 18:52:02.513379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.113 [2024-07-15 18:52:02.513389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.113 qpair failed and we were unable to recover it. 00:26:46.113 [2024-07-15 18:52:02.513547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.113 [2024-07-15 18:52:02.513557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.113 qpair failed and we were unable to recover it. 00:26:46.113 [2024-07-15 18:52:02.513651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.113 [2024-07-15 18:52:02.513661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.113 qpair failed and we were unable to recover it. 00:26:46.113 [2024-07-15 18:52:02.513886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.113 [2024-07-15 18:52:02.513896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.113 qpair failed and we were unable to recover it. 
00:26:46.113 [2024-07-15 18:52:02.514023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.113 [2024-07-15 18:52:02.514033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.113 qpair failed and we were unable to recover it. 00:26:46.113 [2024-07-15 18:52:02.514195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.113 [2024-07-15 18:52:02.514205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.113 qpair failed and we were unable to recover it. 00:26:46.113 [2024-07-15 18:52:02.514385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.113 [2024-07-15 18:52:02.514398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.113 qpair failed and we were unable to recover it. 00:26:46.113 [2024-07-15 18:52:02.514505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.113 [2024-07-15 18:52:02.514515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.113 qpair failed and we were unable to recover it. 00:26:46.113 [2024-07-15 18:52:02.514697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.113 [2024-07-15 18:52:02.514706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.113 qpair failed and we were unable to recover it. 
00:26:46.113 [2024-07-15 18:52:02.514868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.113 [2024-07-15 18:52:02.514877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.113 qpair failed and we were unable to recover it. 00:26:46.113 [2024-07-15 18:52:02.515050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.113 [2024-07-15 18:52:02.515060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.113 qpair failed and we were unable to recover it. 00:26:46.113 [2024-07-15 18:52:02.515154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.113 [2024-07-15 18:52:02.515164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.113 qpair failed and we were unable to recover it. 00:26:46.113 [2024-07-15 18:52:02.515390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.113 [2024-07-15 18:52:02.515400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.113 qpair failed and we were unable to recover it. 00:26:46.113 [2024-07-15 18:52:02.515513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.113 [2024-07-15 18:52:02.515522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.113 qpair failed and we were unable to recover it. 
00:26:46.113 [2024-07-15 18:52:02.515681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.113 [2024-07-15 18:52:02.515691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.113 qpair failed and we were unable to recover it. 00:26:46.113 [2024-07-15 18:52:02.515791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.113 [2024-07-15 18:52:02.515801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.113 qpair failed and we were unable to recover it. 00:26:46.113 [2024-07-15 18:52:02.515991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.113 [2024-07-15 18:52:02.516001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.113 qpair failed and we were unable to recover it. 00:26:46.114 [2024-07-15 18:52:02.516109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.516119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 00:26:46.114 [2024-07-15 18:52:02.516346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.516356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 
00:26:46.114 [2024-07-15 18:52:02.516515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.516525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 00:26:46.114 [2024-07-15 18:52:02.516705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.516715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 00:26:46.114 [2024-07-15 18:52:02.516825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.516835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 00:26:46.114 [2024-07-15 18:52:02.516948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.516958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 00:26:46.114 [2024-07-15 18:52:02.517143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.517153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 
00:26:46.114 [2024-07-15 18:52:02.517346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.517356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 00:26:46.114 [2024-07-15 18:52:02.517464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.517474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 00:26:46.114 [2024-07-15 18:52:02.517705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.517715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 00:26:46.114 [2024-07-15 18:52:02.517825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.517834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 00:26:46.114 [2024-07-15 18:52:02.518034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.518044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 
00:26:46.114 [2024-07-15 18:52:02.518298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.518308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 00:26:46.114 [2024-07-15 18:52:02.518479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.518489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 00:26:46.114 [2024-07-15 18:52:02.518672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.518682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 00:26:46.114 [2024-07-15 18:52:02.518860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.518870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 00:26:46.114 [2024-07-15 18:52:02.519094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.519105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 
00:26:46.114 [2024-07-15 18:52:02.519283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.519293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 00:26:46.114 [2024-07-15 18:52:02.519567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.519577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 00:26:46.114 [2024-07-15 18:52:02.519698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.519708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 00:26:46.114 [2024-07-15 18:52:02.519818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.519827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 00:26:46.114 [2024-07-15 18:52:02.519928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.519938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 
00:26:46.114 [2024-07-15 18:52:02.520107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.520117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 00:26:46.114 [2024-07-15 18:52:02.520295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.520306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 00:26:46.114 [2024-07-15 18:52:02.520477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.520487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 00:26:46.114 [2024-07-15 18:52:02.520678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.520687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 00:26:46.114 [2024-07-15 18:52:02.520846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.520856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 
00:26:46.114 [2024-07-15 18:52:02.521019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.521029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 00:26:46.114 [2024-07-15 18:52:02.521276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.521286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 00:26:46.114 [2024-07-15 18:52:02.521478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.521490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 00:26:46.114 [2024-07-15 18:52:02.521714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.521724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 00:26:46.114 [2024-07-15 18:52:02.521896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.521905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 
00:26:46.114 [2024-07-15 18:52:02.522131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.522141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 00:26:46.114 [2024-07-15 18:52:02.522315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.114 [2024-07-15 18:52:02.522325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.114 qpair failed and we were unable to recover it. 00:26:46.115 [2024-07-15 18:52:02.522488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.115 [2024-07-15 18:52:02.522498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.115 qpair failed and we were unable to recover it. 00:26:46.115 [2024-07-15 18:52:02.522676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.115 [2024-07-15 18:52:02.522686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.115 qpair failed and we were unable to recover it. 00:26:46.115 [2024-07-15 18:52:02.522790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.115 [2024-07-15 18:52:02.522799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.115 qpair failed and we were unable to recover it. 
00:26:46.115 [2024-07-15 18:52:02.522935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.115 [2024-07-15 18:52:02.522945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.115 qpair failed and we were unable to recover it. 00:26:46.115 [2024-07-15 18:52:02.523110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.115 [2024-07-15 18:52:02.523120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.115 qpair failed and we were unable to recover it. 00:26:46.115 [2024-07-15 18:52:02.523232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.115 [2024-07-15 18:52:02.523242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.115 qpair failed and we were unable to recover it. 00:26:46.115 [2024-07-15 18:52:02.523365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.115 [2024-07-15 18:52:02.523375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.115 qpair failed and we were unable to recover it. 00:26:46.115 [2024-07-15 18:52:02.523482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.115 [2024-07-15 18:52:02.523492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.115 qpair failed and we were unable to recover it. 
00:26:46.115 [2024-07-15 18:52:02.523615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.115 [2024-07-15 18:52:02.523625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.115 qpair failed and we were unable to recover it. 00:26:46.115 [2024-07-15 18:52:02.523790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.115 [2024-07-15 18:52:02.523800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.115 qpair failed and we were unable to recover it. 00:26:46.115 [2024-07-15 18:52:02.523896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.115 [2024-07-15 18:52:02.523906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.115 qpair failed and we were unable to recover it. 00:26:46.115 [2024-07-15 18:52:02.524049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.115 [2024-07-15 18:52:02.524059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.115 qpair failed and we were unable to recover it. 00:26:46.115 [2024-07-15 18:52:02.524234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.115 [2024-07-15 18:52:02.524245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.115 qpair failed and we were unable to recover it. 
00:26:46.115 [2024-07-15 18:52:02.524407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.115 [2024-07-15 18:52:02.524417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.115 qpair failed and we were unable to recover it.
00:26:46.115 [2024-07-15 18:52:02.526071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.115 [2024-07-15 18:52:02.526098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:46.115 qpair failed and we were unable to recover it.
00:26:46.115 [2024-07-15 18:52:02.526306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.115 [2024-07-15 18:52:02.526328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:46.115 qpair failed and we were unable to recover it.
00:26:46.118 [2024-07-15 18:52:02.546044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.118 [2024-07-15 18:52:02.546058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:46.118 qpair failed and we were unable to recover it. 00:26:46.118 [2024-07-15 18:52:02.546252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.118 [2024-07-15 18:52:02.546266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:46.118 qpair failed and we were unable to recover it. 00:26:46.118 [2024-07-15 18:52:02.546448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.118 [2024-07-15 18:52:02.546464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:46.118 qpair failed and we were unable to recover it. 00:26:46.118 [2024-07-15 18:52:02.546698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.118 [2024-07-15 18:52:02.546712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:46.118 qpair failed and we were unable to recover it. 00:26:46.118 [2024-07-15 18:52:02.546829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.118 [2024-07-15 18:52:02.546842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420 00:26:46.118 qpair failed and we were unable to recover it. 
00:26:46.118 [2024-07-15 18:52:02.546996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.118 [2024-07-15 18:52:02.547007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.118 qpair failed and we were unable to recover it. 00:26:46.118 [2024-07-15 18:52:02.547181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.118 [2024-07-15 18:52:02.547192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.118 qpair failed and we were unable to recover it. 00:26:46.118 [2024-07-15 18:52:02.547438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.118 [2024-07-15 18:52:02.547448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.118 qpair failed and we were unable to recover it. 00:26:46.118 [2024-07-15 18:52:02.547678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.118 [2024-07-15 18:52:02.547688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.118 qpair failed and we were unable to recover it. 00:26:46.118 [2024-07-15 18:52:02.547872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.547882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 
00:26:46.119 [2024-07-15 18:52:02.548122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.548132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 00:26:46.119 [2024-07-15 18:52:02.548245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.548255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 00:26:46.119 [2024-07-15 18:52:02.548427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.548437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 00:26:46.119 [2024-07-15 18:52:02.548684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.548694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 00:26:46.119 [2024-07-15 18:52:02.548979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.548989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 
00:26:46.119 [2024-07-15 18:52:02.549094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.549104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 00:26:46.119 [2024-07-15 18:52:02.549218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.549232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 00:26:46.119 [2024-07-15 18:52:02.549423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.549433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 00:26:46.119 [2024-07-15 18:52:02.549605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.549614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 00:26:46.119 [2024-07-15 18:52:02.549788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.549797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 
00:26:46.119 [2024-07-15 18:52:02.549918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.549928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 00:26:46.119 [2024-07-15 18:52:02.550049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.550059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 00:26:46.119 [2024-07-15 18:52:02.550162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.550172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 00:26:46.119 [2024-07-15 18:52:02.550367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.550377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 00:26:46.119 [2024-07-15 18:52:02.550628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.550638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 
00:26:46.119 [2024-07-15 18:52:02.550829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.550839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 00:26:46.119 [2024-07-15 18:52:02.550939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.550949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 00:26:46.119 [2024-07-15 18:52:02.551122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.551132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 00:26:46.119 [2024-07-15 18:52:02.551387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.551397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 00:26:46.119 [2024-07-15 18:52:02.551520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.551532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 
00:26:46.119 [2024-07-15 18:52:02.551614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.551624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 00:26:46.119 [2024-07-15 18:52:02.551725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.551735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 00:26:46.119 [2024-07-15 18:52:02.551845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.551855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 00:26:46.119 [2024-07-15 18:52:02.552078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.552088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 00:26:46.119 [2024-07-15 18:52:02.552265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.552275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 
00:26:46.119 [2024-07-15 18:52:02.552476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.552485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 00:26:46.119 [2024-07-15 18:52:02.552656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.552666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 00:26:46.119 [2024-07-15 18:52:02.552829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.552838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 00:26:46.119 [2024-07-15 18:52:02.552997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.553007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 00:26:46.119 [2024-07-15 18:52:02.553176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.553185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 
00:26:46.119 [2024-07-15 18:52:02.553295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.553306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 00:26:46.119 [2024-07-15 18:52:02.553475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.553485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 00:26:46.119 [2024-07-15 18:52:02.553590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.553600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 00:26:46.119 [2024-07-15 18:52:02.553763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.119 [2024-07-15 18:52:02.553773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.119 qpair failed and we were unable to recover it. 00:26:46.119 [2024-07-15 18:52:02.553950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.553959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 
00:26:46.120 [2024-07-15 18:52:02.554139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.554149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 00:26:46.120 [2024-07-15 18:52:02.554275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.554285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 00:26:46.120 [2024-07-15 18:52:02.554469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.554479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 00:26:46.120 [2024-07-15 18:52:02.554589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.554599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 00:26:46.120 [2024-07-15 18:52:02.554855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.554865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 
00:26:46.120 [2024-07-15 18:52:02.554958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.554968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 00:26:46.120 [2024-07-15 18:52:02.555073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.555083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 00:26:46.120 [2024-07-15 18:52:02.555317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.555327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 00:26:46.120 [2024-07-15 18:52:02.555493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.555503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 00:26:46.120 [2024-07-15 18:52:02.555661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.555671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 
00:26:46.120 [2024-07-15 18:52:02.555921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.555931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 00:26:46.120 [2024-07-15 18:52:02.556105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.556116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 00:26:46.120 [2024-07-15 18:52:02.556293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.556304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 00:26:46.120 [2024-07-15 18:52:02.556469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.556478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 00:26:46.120 [2024-07-15 18:52:02.556707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.556717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 
00:26:46.120 [2024-07-15 18:52:02.556831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.556841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 00:26:46.120 [2024-07-15 18:52:02.556997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.557007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 00:26:46.120 [2024-07-15 18:52:02.557173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.557183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 00:26:46.120 [2024-07-15 18:52:02.557377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.557387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 00:26:46.120 [2024-07-15 18:52:02.557567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.557577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 
00:26:46.120 [2024-07-15 18:52:02.557741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.557751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 00:26:46.120 [2024-07-15 18:52:02.557846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.557856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 00:26:46.120 [2024-07-15 18:52:02.558034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.558043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 00:26:46.120 [2024-07-15 18:52:02.558269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.558279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 00:26:46.120 [2024-07-15 18:52:02.558391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.558403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 
00:26:46.120 [2024-07-15 18:52:02.558522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.558532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 00:26:46.120 [2024-07-15 18:52:02.558763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.558772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 00:26:46.120 [2024-07-15 18:52:02.558886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.558896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 00:26:46.120 [2024-07-15 18:52:02.559130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.559139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 00:26:46.120 [2024-07-15 18:52:02.559336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.559346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 
00:26:46.120 [2024-07-15 18:52:02.559416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.559426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 00:26:46.120 [2024-07-15 18:52:02.559623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.559633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 00:26:46.120 [2024-07-15 18:52:02.559816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.559826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 00:26:46.120 [2024-07-15 18:52:02.559938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.559948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 00:26:46.120 [2024-07-15 18:52:02.560112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.120 [2024-07-15 18:52:02.560122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.120 qpair failed and we were unable to recover it. 
00:26:46.124 [2024-07-15 18:52:02.579264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.124 [2024-07-15 18:52:02.579274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.124 qpair failed and we were unable to recover it. 00:26:46.124 [2024-07-15 18:52:02.579453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.124 [2024-07-15 18:52:02.579463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.124 qpair failed and we were unable to recover it. 00:26:46.124 [2024-07-15 18:52:02.579586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.124 [2024-07-15 18:52:02.579596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.124 qpair failed and we were unable to recover it. 00:26:46.124 [2024-07-15 18:52:02.579767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.124 [2024-07-15 18:52:02.579776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.124 qpair failed and we were unable to recover it. 00:26:46.124 [2024-07-15 18:52:02.579880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.124 [2024-07-15 18:52:02.579890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.124 qpair failed and we were unable to recover it. 
00:26:46.124 [2024-07-15 18:52:02.580049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.124 [2024-07-15 18:52:02.580058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.124 qpair failed and we were unable to recover it. 00:26:46.124 [2024-07-15 18:52:02.580306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.124 [2024-07-15 18:52:02.580317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.124 qpair failed and we were unable to recover it. 00:26:46.124 [2024-07-15 18:52:02.580440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.124 [2024-07-15 18:52:02.580450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.124 qpair failed and we were unable to recover it. 00:26:46.124 [2024-07-15 18:52:02.580629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.124 [2024-07-15 18:52:02.580638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.124 qpair failed and we were unable to recover it. 00:26:46.124 [2024-07-15 18:52:02.580754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.124 [2024-07-15 18:52:02.580764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.124 qpair failed and we were unable to recover it. 
00:26:46.124 [2024-07-15 18:52:02.580940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.124 [2024-07-15 18:52:02.580950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.124 qpair failed and we were unable to recover it. 00:26:46.124 [2024-07-15 18:52:02.581125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.124 [2024-07-15 18:52:02.581135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.124 qpair failed and we were unable to recover it. 00:26:46.124 [2024-07-15 18:52:02.581299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.124 [2024-07-15 18:52:02.581309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.124 qpair failed and we were unable to recover it. 00:26:46.124 [2024-07-15 18:52:02.581475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.124 [2024-07-15 18:52:02.581485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.124 qpair failed and we were unable to recover it. 00:26:46.124 [2024-07-15 18:52:02.581599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.124 [2024-07-15 18:52:02.581608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.124 qpair failed and we were unable to recover it. 
00:26:46.124 [2024-07-15 18:52:02.581784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.124 [2024-07-15 18:52:02.581794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.124 qpair failed and we were unable to recover it. 00:26:46.124 [2024-07-15 18:52:02.581902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.124 [2024-07-15 18:52:02.581912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.124 qpair failed and we were unable to recover it. 00:26:46.124 [2024-07-15 18:52:02.582018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.124 [2024-07-15 18:52:02.582028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.124 qpair failed and we were unable to recover it. 00:26:46.124 [2024-07-15 18:52:02.582195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.124 [2024-07-15 18:52:02.582205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.124 qpair failed and we were unable to recover it. 00:26:46.124 [2024-07-15 18:52:02.582390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.124 [2024-07-15 18:52:02.582400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.124 qpair failed and we were unable to recover it. 
00:26:46.124 [2024-07-15 18:52:02.582514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.124 [2024-07-15 18:52:02.582524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.124 qpair failed and we were unable to recover it. 00:26:46.124 [2024-07-15 18:52:02.582684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.124 [2024-07-15 18:52:02.582694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.124 qpair failed and we were unable to recover it. 00:26:46.124 [2024-07-15 18:52:02.582862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.124 [2024-07-15 18:52:02.582872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.124 qpair failed and we were unable to recover it. 00:26:46.124 [2024-07-15 18:52:02.582984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.124 [2024-07-15 18:52:02.582993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.124 qpair failed and we were unable to recover it. 00:26:46.124 [2024-07-15 18:52:02.583176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.124 [2024-07-15 18:52:02.583185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.124 qpair failed and we were unable to recover it. 
00:26:46.124 [2024-07-15 18:52:02.583453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.124 [2024-07-15 18:52:02.583463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.124 qpair failed and we were unable to recover it. 00:26:46.124 [2024-07-15 18:52:02.583718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.124 [2024-07-15 18:52:02.583728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.124 qpair failed and we were unable to recover it. 00:26:46.124 [2024-07-15 18:52:02.583905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.124 [2024-07-15 18:52:02.583914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.124 qpair failed and we were unable to recover it. 00:26:46.124 [2024-07-15 18:52:02.584097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.124 [2024-07-15 18:52:02.584107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.124 qpair failed and we were unable to recover it. 00:26:46.124 [2024-07-15 18:52:02.584214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.584227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 
00:26:46.125 [2024-07-15 18:52:02.584444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.584454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.584581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.584591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.584765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.584775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.584967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.584977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.585142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.585152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 
00:26:46.125 [2024-07-15 18:52:02.585261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.585272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.585449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.585459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.585629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.585638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.585892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.585901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.586015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.586026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 
00:26:46.125 [2024-07-15 18:52:02.586209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.586219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.586394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.586404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.586538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.586548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.586723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.586733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.586861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.586871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 
00:26:46.125 [2024-07-15 18:52:02.587041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.587051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.587277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.587287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.587437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.587446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.587549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.587559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.587666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.587676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 
00:26:46.125 [2024-07-15 18:52:02.587864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.587873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.587965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.587975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.588085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.588095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.588189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.588199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.588434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.588444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 
00:26:46.125 [2024-07-15 18:52:02.588634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.588643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.588873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.588882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.588993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.589002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.589261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.589271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.589448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.589458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 
00:26:46.125 [2024-07-15 18:52:02.589582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.589592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.589768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.589777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.590010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.590019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.590194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.590204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.590445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.590455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 
00:26:46.125 [2024-07-15 18:52:02.590552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.590562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.590741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.590751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.125 qpair failed and we were unable to recover it. 00:26:46.125 [2024-07-15 18:52:02.590975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.125 [2024-07-15 18:52:02.590984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.126 qpair failed and we were unable to recover it. 00:26:46.126 [2024-07-15 18:52:02.591233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.126 [2024-07-15 18:52:02.591243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.126 qpair failed and we were unable to recover it. 00:26:46.126 [2024-07-15 18:52:02.591414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.126 [2024-07-15 18:52:02.591424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.126 qpair failed and we were unable to recover it. 
00:26:46.126 [2024-07-15 18:52:02.591648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.126 [2024-07-15 18:52:02.591658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.126 qpair failed and we were unable to recover it. 00:26:46.126 [2024-07-15 18:52:02.591833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.126 [2024-07-15 18:52:02.591842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.126 qpair failed and we were unable to recover it. 00:26:46.126 [2024-07-15 18:52:02.591941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.126 [2024-07-15 18:52:02.591951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.126 qpair failed and we were unable to recover it. 00:26:46.126 [2024-07-15 18:52:02.592178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.126 [2024-07-15 18:52:02.592188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.126 qpair failed and we were unable to recover it. 00:26:46.126 [2024-07-15 18:52:02.592293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.126 [2024-07-15 18:52:02.592303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.126 qpair failed and we were unable to recover it. 
00:26:46.126 [2024-07-15 18:52:02.592498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.126 [2024-07-15 18:52:02.592507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.126 qpair failed and we were unable to recover it. 00:26:46.126 [2024-07-15 18:52:02.592613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.126 [2024-07-15 18:52:02.592623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.126 qpair failed and we were unable to recover it. 00:26:46.126 [2024-07-15 18:52:02.592731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.126 [2024-07-15 18:52:02.592740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.126 qpair failed and we were unable to recover it. 00:26:46.126 [2024-07-15 18:52:02.592857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.126 [2024-07-15 18:52:02.592867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.126 qpair failed and we were unable to recover it. 00:26:46.126 [2024-07-15 18:52:02.593130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.126 [2024-07-15 18:52:02.593142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.126 qpair failed and we were unable to recover it. 
00:26:46.126 [2024-07-15 18:52:02.593319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.126 [2024-07-15 18:52:02.593329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.126 qpair failed and we were unable to recover it. 00:26:46.126 [2024-07-15 18:52:02.593420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.126 [2024-07-15 18:52:02.593429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.126 qpair failed and we were unable to recover it. 00:26:46.126 [2024-07-15 18:52:02.593611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.126 [2024-07-15 18:52:02.593620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.126 qpair failed and we were unable to recover it. 00:26:46.126 [2024-07-15 18:52:02.593798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.126 [2024-07-15 18:52:02.593807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.126 qpair failed and we were unable to recover it. 00:26:46.126 [2024-07-15 18:52:02.593980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.126 [2024-07-15 18:52:02.593990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.126 qpair failed and we were unable to recover it. 
00:26:46.126 [2024-07-15 18:52:02.594182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.126 [2024-07-15 18:52:02.594192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.126 qpair failed and we were unable to recover it.
00:26:46.126 [2024-07-15 18:52:02.594382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.126 [2024-07-15 18:52:02.594393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.126 qpair failed and we were unable to recover it.
00:26:46.126 [2024-07-15 18:52:02.594571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.126 [2024-07-15 18:52:02.594581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.126 qpair failed and we were unable to recover it.
00:26:46.126 [2024-07-15 18:52:02.594876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.126 [2024-07-15 18:52:02.594886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.126 qpair failed and we were unable to recover it.
00:26:46.126 [2024-07-15 18:52:02.595038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.126 [2024-07-15 18:52:02.595048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.126 qpair failed and we were unable to recover it.
00:26:46.126 [2024-07-15 18:52:02.595217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.126 [2024-07-15 18:52:02.595230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.126 qpair failed and we were unable to recover it.
00:26:46.126 [2024-07-15 18:52:02.595395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.126 [2024-07-15 18:52:02.595405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.126 qpair failed and we were unable to recover it.
00:26:46.126 [2024-07-15 18:52:02.595518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.126 [2024-07-15 18:52:02.595528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.126 qpair failed and we were unable to recover it.
00:26:46.126 [2024-07-15 18:52:02.595701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.126 [2024-07-15 18:52:02.595711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.126 qpair failed and we were unable to recover it.
00:26:46.126 [2024-07-15 18:52:02.595937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.126 [2024-07-15 18:52:02.595947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.126 qpair failed and we were unable to recover it.
00:26:46.126 [2024-07-15 18:52:02.596126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.126 [2024-07-15 18:52:02.596136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.126 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.596258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.596268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.596429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.596438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.596669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.596679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.596830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.596839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.596944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.596955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.597076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.597086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.597247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.597257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.597493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.597503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.597612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.597622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.597801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.597811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.598036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.598046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.598163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.598173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.598345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.598355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.598607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.598617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.598794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.598804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.598997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.599007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.599168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.599178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.599358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.599368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.599526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.599536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.599714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.599724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.599970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.599980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.600084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.600093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.600274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.600284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.600459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.600470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.600595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.600606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.600779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.600789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.600958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.600967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.601209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.601219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.601424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.601435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.601688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.601698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.601863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.601873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.602071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.602081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.602255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.602265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.602389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.602399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.602579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.602589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.602816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.602826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.127 [2024-07-15 18:52:02.603054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.127 [2024-07-15 18:52:02.603064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.127 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.603235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.603246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.603415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.603424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.603545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.603555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.603731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.603741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.603852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.603862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.603973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.603982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.604141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.604151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.604265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.604276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.604449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.604459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.604626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.604636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.604753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.604762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.604947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.604957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.605129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.605138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.605312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.605323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.605439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.605448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.605560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.605570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.605741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.605751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.605851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.605860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.606002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.606012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.606186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.606195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.606373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.606383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.606556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.606566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.606802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.606811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.606919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.606929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.607088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.607097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.607273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.607283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.607477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.607488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.607586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.607597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.607758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.607769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.607940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.607950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.608111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.608121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.608245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.608256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.608437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.608447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.608627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.608637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.128 qpair failed and we were unable to recover it.
00:26:46.128 [2024-07-15 18:52:02.608869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.128 [2024-07-15 18:52:02.608878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.129 qpair failed and we were unable to recover it.
00:26:46.129 [2024-07-15 18:52:02.609062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.129 [2024-07-15 18:52:02.609071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.129 qpair failed and we were unable to recover it.
00:26:46.129 [2024-07-15 18:52:02.609242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.129 [2024-07-15 18:52:02.609252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.129 qpair failed and we were unable to recover it.
00:26:46.129 [2024-07-15 18:52:02.609358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.129 [2024-07-15 18:52:02.609368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.129 qpair failed and we were unable to recover it.
00:26:46.129 [2024-07-15 18:52:02.609564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.129 [2024-07-15 18:52:02.609574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.129 qpair failed and we were unable to recover it.
00:26:46.129 [2024-07-15 18:52:02.609682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.129 [2024-07-15 18:52:02.609691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.129 qpair failed and we were unable to recover it.
00:26:46.129 [2024-07-15 18:52:02.609857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.129 [2024-07-15 18:52:02.609867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.129 qpair failed and we were unable to recover it.
00:26:46.129 [2024-07-15 18:52:02.609984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.129 [2024-07-15 18:52:02.609994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.129 qpair failed and we were unable to recover it.
00:26:46.129 [2024-07-15 18:52:02.610153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.129 [2024-07-15 18:52:02.610163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.129 qpair failed and we were unable to recover it.
00:26:46.129 [2024-07-15 18:52:02.610260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.129 [2024-07-15 18:52:02.610270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.129 qpair failed and we were unable to recover it.
00:26:46.129 [2024-07-15 18:52:02.610394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.129 [2024-07-15 18:52:02.610404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.129 qpair failed and we were unable to recover it.
00:26:46.129 [2024-07-15 18:52:02.610592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.129 [2024-07-15 18:52:02.610602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.129 qpair failed and we were unable to recover it.
00:26:46.129 [2024-07-15 18:52:02.610767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.129 [2024-07-15 18:52:02.610776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.129 qpair failed and we were unable to recover it.
00:26:46.129 [2024-07-15 18:52:02.610934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.129 [2024-07-15 18:52:02.610943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.129 qpair failed and we were unable to recover it. 00:26:46.129 [2024-07-15 18:52:02.611174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.129 [2024-07-15 18:52:02.611184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.129 qpair failed and we were unable to recover it. 00:26:46.129 [2024-07-15 18:52:02.611356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.129 [2024-07-15 18:52:02.611367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.129 qpair failed and we were unable to recover it. 00:26:46.129 [2024-07-15 18:52:02.611443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.129 [2024-07-15 18:52:02.611452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.129 qpair failed and we were unable to recover it. 00:26:46.129 [2024-07-15 18:52:02.611624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.129 [2024-07-15 18:52:02.611634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.129 qpair failed and we were unable to recover it. 
00:26:46.129 [2024-07-15 18:52:02.611732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.129 [2024-07-15 18:52:02.611741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.129 qpair failed and we were unable to recover it. 00:26:46.129 [2024-07-15 18:52:02.611996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.129 [2024-07-15 18:52:02.612017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.129 qpair failed and we were unable to recover it. 00:26:46.129 [2024-07-15 18:52:02.612136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.129 [2024-07-15 18:52:02.612151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.129 qpair failed and we were unable to recover it. 00:26:46.129 [2024-07-15 18:52:02.612332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.129 [2024-07-15 18:52:02.612347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.129 qpair failed and we were unable to recover it. 00:26:46.129 [2024-07-15 18:52:02.612453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.129 [2024-07-15 18:52:02.612466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.129 qpair failed and we were unable to recover it. 
00:26:46.129 [2024-07-15 18:52:02.612649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.129 [2024-07-15 18:52:02.612663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.129 qpair failed and we were unable to recover it. 00:26:46.129 [2024-07-15 18:52:02.612771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.129 [2024-07-15 18:52:02.612785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.129 qpair failed and we were unable to recover it. 00:26:46.129 [2024-07-15 18:52:02.612948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.129 [2024-07-15 18:52:02.612963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.129 qpair failed and we were unable to recover it. 00:26:46.129 [2024-07-15 18:52:02.613145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.129 [2024-07-15 18:52:02.613159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.129 qpair failed and we were unable to recover it. 00:26:46.129 [2024-07-15 18:52:02.613358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.129 [2024-07-15 18:52:02.613372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.129 qpair failed and we were unable to recover it. 
00:26:46.129 [2024-07-15 18:52:02.613575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.129 [2024-07-15 18:52:02.613589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.129 qpair failed and we were unable to recover it. 00:26:46.129 [2024-07-15 18:52:02.613758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.129 [2024-07-15 18:52:02.613771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.129 qpair failed and we were unable to recover it. 00:26:46.129 [2024-07-15 18:52:02.613893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.129 [2024-07-15 18:52:02.613907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.129 qpair failed and we were unable to recover it. 00:26:46.129 [2024-07-15 18:52:02.614029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.129 [2024-07-15 18:52:02.614042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.129 qpair failed and we were unable to recover it. 00:26:46.129 [2024-07-15 18:52:02.614178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.129 [2024-07-15 18:52:02.614196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.129 qpair failed and we were unable to recover it. 
00:26:46.129 [2024-07-15 18:52:02.614388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.129 [2024-07-15 18:52:02.614402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.129 qpair failed and we were unable to recover it. 00:26:46.129 [2024-07-15 18:52:02.614634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.129 [2024-07-15 18:52:02.614649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.129 qpair failed and we were unable to recover it. 00:26:46.129 [2024-07-15 18:52:02.614773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.129 [2024-07-15 18:52:02.614787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.129 qpair failed and we were unable to recover it. 00:26:46.129 [2024-07-15 18:52:02.614961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.129 [2024-07-15 18:52:02.614974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.129 qpair failed and we were unable to recover it. 00:26:46.129 [2024-07-15 18:52:02.615144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.615158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 
00:26:46.130 [2024-07-15 18:52:02.615268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.615281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 00:26:46.130 [2024-07-15 18:52:02.615468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.615483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 00:26:46.130 [2024-07-15 18:52:02.615608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.615622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 00:26:46.130 [2024-07-15 18:52:02.615870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.615885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 00:26:46.130 [2024-07-15 18:52:02.616117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.616132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 
00:26:46.130 [2024-07-15 18:52:02.616319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.616333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 00:26:46.130 [2024-07-15 18:52:02.616462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.616477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 00:26:46.130 [2024-07-15 18:52:02.616648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.616661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 00:26:46.130 [2024-07-15 18:52:02.616918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.616932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 00:26:46.130 [2024-07-15 18:52:02.617187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.617201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 
00:26:46.130 [2024-07-15 18:52:02.617398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.617412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 00:26:46.130 [2024-07-15 18:52:02.617543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.617557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 00:26:46.130 [2024-07-15 18:52:02.617745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.617759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 00:26:46.130 [2024-07-15 18:52:02.617881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.617894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 00:26:46.130 [2024-07-15 18:52:02.618079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.618093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 
00:26:46.130 [2024-07-15 18:52:02.618221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.618240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 00:26:46.130 [2024-07-15 18:52:02.618374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.618388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 00:26:46.130 [2024-07-15 18:52:02.618568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.618582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 00:26:46.130 [2024-07-15 18:52:02.618839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.618852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 00:26:46.130 [2024-07-15 18:52:02.619123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.619137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 
00:26:46.130 [2024-07-15 18:52:02.619327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.619341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 00:26:46.130 [2024-07-15 18:52:02.619539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.619551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 00:26:46.130 [2024-07-15 18:52:02.619785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.619795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 00:26:46.130 [2024-07-15 18:52:02.619899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.619910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 00:26:46.130 [2024-07-15 18:52:02.620091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.620101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 
00:26:46.130 [2024-07-15 18:52:02.620278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.620288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 00:26:46.130 [2024-07-15 18:52:02.620532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.620542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 00:26:46.130 [2024-07-15 18:52:02.620786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.620796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 00:26:46.130 [2024-07-15 18:52:02.620906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.620916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 00:26:46.130 [2024-07-15 18:52:02.621170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.621180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 
00:26:46.130 [2024-07-15 18:52:02.621410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.130 [2024-07-15 18:52:02.621420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.130 qpair failed and we were unable to recover it. 00:26:46.130 [2024-07-15 18:52:02.621580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.131 [2024-07-15 18:52:02.621590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.131 qpair failed and we were unable to recover it. 00:26:46.131 [2024-07-15 18:52:02.621720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.131 [2024-07-15 18:52:02.621729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.131 qpair failed and we were unable to recover it. 00:26:46.131 [2024-07-15 18:52:02.621836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.131 [2024-07-15 18:52:02.621845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.131 qpair failed and we were unable to recover it. 00:26:46.131 [2024-07-15 18:52:02.621943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.131 [2024-07-15 18:52:02.621954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.131 qpair failed and we were unable to recover it. 
00:26:46.131 [2024-07-15 18:52:02.622128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.131 [2024-07-15 18:52:02.622138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.131 qpair failed and we were unable to recover it. 00:26:46.131 [2024-07-15 18:52:02.622243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.131 [2024-07-15 18:52:02.622253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.131 qpair failed and we were unable to recover it. 00:26:46.131 [2024-07-15 18:52:02.622448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.131 [2024-07-15 18:52:02.622457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.131 qpair failed and we were unable to recover it. 00:26:46.131 [2024-07-15 18:52:02.622623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.131 [2024-07-15 18:52:02.622633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.131 qpair failed and we were unable to recover it. 00:26:46.131 [2024-07-15 18:52:02.622736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.131 [2024-07-15 18:52:02.622746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.131 qpair failed and we were unable to recover it. 
00:26:46.131 [2024-07-15 18:52:02.622918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.131 [2024-07-15 18:52:02.622928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.131 qpair failed and we were unable to recover it. 00:26:46.131 [2024-07-15 18:52:02.623102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.131 [2024-07-15 18:52:02.623112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.131 qpair failed and we were unable to recover it. 00:26:46.131 [2024-07-15 18:52:02.623283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.131 [2024-07-15 18:52:02.623292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.131 qpair failed and we were unable to recover it. 00:26:46.131 [2024-07-15 18:52:02.623562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.131 [2024-07-15 18:52:02.623572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.131 qpair failed and we were unable to recover it. 00:26:46.131 [2024-07-15 18:52:02.623682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.131 [2024-07-15 18:52:02.623692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.131 qpair failed and we were unable to recover it. 
00:26:46.131 [2024-07-15 18:52:02.623807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.131 [2024-07-15 18:52:02.623817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.131 qpair failed and we were unable to recover it. 00:26:46.131 [2024-07-15 18:52:02.624044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.131 [2024-07-15 18:52:02.624054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.131 qpair failed and we were unable to recover it. 00:26:46.131 [2024-07-15 18:52:02.624218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.131 [2024-07-15 18:52:02.624231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.131 qpair failed and we were unable to recover it. 00:26:46.131 [2024-07-15 18:52:02.624458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.131 [2024-07-15 18:52:02.624469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.131 qpair failed and we were unable to recover it. 00:26:46.131 [2024-07-15 18:52:02.624654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.131 [2024-07-15 18:52:02.624664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.131 qpair failed and we were unable to recover it. 
00:26:46.131 [2024-07-15 18:52:02.624897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.131 [2024-07-15 18:52:02.624907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.131 qpair failed and we were unable to recover it. 00:26:46.131 [2024-07-15 18:52:02.625083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.131 [2024-07-15 18:52:02.625092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.131 qpair failed and we were unable to recover it. 00:26:46.131 [2024-07-15 18:52:02.625262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.131 [2024-07-15 18:52:02.625272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.131 qpair failed and we were unable to recover it. 00:26:46.131 [2024-07-15 18:52:02.625382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.131 [2024-07-15 18:52:02.625392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.131 qpair failed and we were unable to recover it. 00:26:46.131 [2024-07-15 18:52:02.625593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.131 [2024-07-15 18:52:02.625602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.131 qpair failed and we were unable to recover it. 
00:26:46.131 [2024-07-15 18:52:02.625855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.131 [2024-07-15 18:52:02.625865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.131 qpair failed and we were unable to recover it. 00:26:46.131 [2024-07-15 18:52:02.625973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.131 [2024-07-15 18:52:02.625982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.131 qpair failed and we were unable to recover it. 00:26:46.131 [2024-07-15 18:52:02.626160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.131 [2024-07-15 18:52:02.626170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.131 qpair failed and we were unable to recover it. 00:26:46.131 [2024-07-15 18:52:02.626334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.131 [2024-07-15 18:52:02.626345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.131 qpair failed and we were unable to recover it. 00:26:46.131 [2024-07-15 18:52:02.626572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.131 [2024-07-15 18:52:02.626582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.131 qpair failed and we were unable to recover it. 
00:26:46.131 [2024-07-15 18:52:02.626686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.131 [2024-07-15 18:52:02.626695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.131 qpair failed and we were unable to recover it.
[... same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeated for tqpair=0x7ff7c0000b90 from 18:52:02.626894 through 18:52:02.633854 ...]
00:26:46.132 [2024-07-15 18:52:02.634029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.133 [2024-07-15 18:52:02.634045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:46.133 qpair failed and we were unable to recover it.
[... same triplet repeated for tqpair=0x7ff7c8000b90 from 18:52:02.634280 through 18:52:02.647607 ...]
00:26:46.134 [2024-07-15 18:52:02.647805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.647818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 00:26:46.135 [2024-07-15 18:52:02.647935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.647948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 00:26:46.135 [2024-07-15 18:52:02.648084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.648097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 00:26:46.135 [2024-07-15 18:52:02.648339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.648353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 00:26:46.135 [2024-07-15 18:52:02.648459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.648473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 
00:26:46.135 [2024-07-15 18:52:02.648680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.648694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 00:26:46.135 [2024-07-15 18:52:02.648821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.648837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 00:26:46.135 [2024-07-15 18:52:02.648927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.648941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 00:26:46.135 [2024-07-15 18:52:02.649174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.649188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 00:26:46.135 [2024-07-15 18:52:02.649361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.649375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 
00:26:46.135 [2024-07-15 18:52:02.649556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.649570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 00:26:46.135 [2024-07-15 18:52:02.649809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.649822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 00:26:46.135 [2024-07-15 18:52:02.650063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.650077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 00:26:46.135 [2024-07-15 18:52:02.650255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.650269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 00:26:46.135 [2024-07-15 18:52:02.650389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.650402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 
00:26:46.135 [2024-07-15 18:52:02.650621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.650635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 00:26:46.135 [2024-07-15 18:52:02.650895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.650908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 00:26:46.135 [2024-07-15 18:52:02.651118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.651132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 00:26:46.135 [2024-07-15 18:52:02.651262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.651277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 00:26:46.135 [2024-07-15 18:52:02.651477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.651491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 
00:26:46.135 [2024-07-15 18:52:02.651623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.651637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 00:26:46.135 [2024-07-15 18:52:02.651824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.651839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 00:26:46.135 [2024-07-15 18:52:02.652012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.652025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 00:26:46.135 [2024-07-15 18:52:02.652143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.652157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 00:26:46.135 [2024-07-15 18:52:02.652271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.652292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 
00:26:46.135 [2024-07-15 18:52:02.652482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.652496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 00:26:46.135 [2024-07-15 18:52:02.652737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.652751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 00:26:46.135 [2024-07-15 18:52:02.652953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.652967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 00:26:46.135 [2024-07-15 18:52:02.653148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.653162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 00:26:46.135 [2024-07-15 18:52:02.653418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.135 [2024-07-15 18:52:02.653432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.135 qpair failed and we were unable to recover it. 
00:26:46.136 [2024-07-15 18:52:02.653616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.653630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 00:26:46.136 [2024-07-15 18:52:02.653827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.653841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 00:26:46.136 [2024-07-15 18:52:02.654145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.654158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 00:26:46.136 [2024-07-15 18:52:02.654282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.654296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 00:26:46.136 [2024-07-15 18:52:02.654411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.654424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 
00:26:46.136 [2024-07-15 18:52:02.654599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.654613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 00:26:46.136 [2024-07-15 18:52:02.654845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.654858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 00:26:46.136 [2024-07-15 18:52:02.655040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.655053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 00:26:46.136 [2024-07-15 18:52:02.655162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.655177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 00:26:46.136 [2024-07-15 18:52:02.655296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.655311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 
00:26:46.136 [2024-07-15 18:52:02.655505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.655519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 00:26:46.136 [2024-07-15 18:52:02.655688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.655702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 00:26:46.136 [2024-07-15 18:52:02.655982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.655995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 00:26:46.136 [2024-07-15 18:52:02.656173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.656187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 00:26:46.136 [2024-07-15 18:52:02.656374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.656388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 
00:26:46.136 [2024-07-15 18:52:02.656567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.656581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 00:26:46.136 [2024-07-15 18:52:02.656710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.656726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 00:26:46.136 [2024-07-15 18:52:02.656850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.656864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 00:26:46.136 [2024-07-15 18:52:02.657100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.657113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 00:26:46.136 [2024-07-15 18:52:02.657242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.657257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 
00:26:46.136 [2024-07-15 18:52:02.657405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.657418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 00:26:46.136 [2024-07-15 18:52:02.657601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.657616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 00:26:46.136 [2024-07-15 18:52:02.657849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.657863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 00:26:46.136 [2024-07-15 18:52:02.657985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.657999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 00:26:46.136 [2024-07-15 18:52:02.658116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.658129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 
00:26:46.136 [2024-07-15 18:52:02.658254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.658269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 00:26:46.136 [2024-07-15 18:52:02.658479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.658493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 00:26:46.136 [2024-07-15 18:52:02.658700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.658714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 00:26:46.136 [2024-07-15 18:52:02.658831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.658844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 00:26:46.136 [2024-07-15 18:52:02.659019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.659033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 
00:26:46.136 [2024-07-15 18:52:02.659239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.659254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 00:26:46.136 [2024-07-15 18:52:02.659379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.659393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 00:26:46.136 [2024-07-15 18:52:02.659523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.659536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 00:26:46.136 [2024-07-15 18:52:02.659734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.659748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 00:26:46.136 [2024-07-15 18:52:02.659872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.136 [2024-07-15 18:52:02.659886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.136 qpair failed and we were unable to recover it. 
00:26:46.136 [2024-07-15 18:52:02.660053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.660067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.137 [2024-07-15 18:52:02.660249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.660263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.137 [2024-07-15 18:52:02.660448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.660462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.137 [2024-07-15 18:52:02.660632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.660645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.137 [2024-07-15 18:52:02.660774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.660788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 
00:26:46.137 [2024-07-15 18:52:02.660894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.660907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.137 [2024-07-15 18:52:02.661039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.661053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.137 [2024-07-15 18:52:02.661154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.661168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.137 [2024-07-15 18:52:02.661294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.661309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.137 [2024-07-15 18:52:02.661413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.661426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 
00:26:46.137 [2024-07-15 18:52:02.661595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.661609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.137 [2024-07-15 18:52:02.661842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.661856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.137 [2024-07-15 18:52:02.661972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.661986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.137 [2024-07-15 18:52:02.662093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.662106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.137 [2024-07-15 18:52:02.662367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.662381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 
00:26:46.137 [2024-07-15 18:52:02.662632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.662645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.137 [2024-07-15 18:52:02.662809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.662823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.137 [2024-07-15 18:52:02.663030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.663043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.137 [2024-07-15 18:52:02.663278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.663292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.137 [2024-07-15 18:52:02.663468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.663482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 
00:26:46.137 [2024-07-15 18:52:02.663602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.663616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.137 [2024-07-15 18:52:02.663798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.663814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.137 [2024-07-15 18:52:02.663936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.663950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.137 [2024-07-15 18:52:02.664236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.664250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.137 [2024-07-15 18:52:02.664374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.664400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 
00:26:46.137 [2024-07-15 18:52:02.664615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.664629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.137 [2024-07-15 18:52:02.664825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.664838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.137 [2024-07-15 18:52:02.665073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.665087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.137 [2024-07-15 18:52:02.665293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.665308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.137 [2024-07-15 18:52:02.665478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.665491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 
00:26:46.137 [2024-07-15 18:52:02.665645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.665659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.137 [2024-07-15 18:52:02.665908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.665921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.137 [2024-07-15 18:52:02.666049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.666062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.137 [2024-07-15 18:52:02.666269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.666283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.137 [2024-07-15 18:52:02.666496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.666510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 
00:26:46.137 [2024-07-15 18:52:02.666773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.666787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.137 [2024-07-15 18:52:02.666915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.137 [2024-07-15 18:52:02.666929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.137 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.667054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.667067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.667175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.667189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.667325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.667338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 
00:26:46.138 [2024-07-15 18:52:02.667516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.667529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.667698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.667712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.667866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.667880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.668113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.668127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.668248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.668262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 
00:26:46.138 [2024-07-15 18:52:02.668442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.668456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.668574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.668588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.668848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.668862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.668984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.668998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.669132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.669146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 
00:26:46.138 [2024-07-15 18:52:02.669328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.669343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.669541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.669555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.669726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.669741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.669908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.669922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.670042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.670056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 
00:26:46.138 [2024-07-15 18:52:02.670158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.670172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.670405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.670419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.670607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.670621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.670801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.670815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.670917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.670931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 
00:26:46.138 [2024-07-15 18:52:02.671102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.671115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.671237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.671256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.671360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.671373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.671561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.671574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.671836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.671850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 
00:26:46.138 [2024-07-15 18:52:02.671965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.671979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.672163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.672178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.672350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.672363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.672578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.672592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.672761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.672774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 
00:26:46.138 [2024-07-15 18:52:02.673032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.673046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.673168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.673182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.673365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.673379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.138 [2024-07-15 18:52:02.673454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.138 [2024-07-15 18:52:02.673467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.138 qpair failed and we were unable to recover it. 00:26:46.139 [2024-07-15 18:52:02.673663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.673677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 
00:26:46.139 [2024-07-15 18:52:02.673852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.673866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 00:26:46.139 [2024-07-15 18:52:02.674035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.674049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 00:26:46.139 [2024-07-15 18:52:02.674169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.674182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 00:26:46.139 [2024-07-15 18:52:02.674443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.674458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 00:26:46.139 [2024-07-15 18:52:02.674611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.674625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 
00:26:46.139 [2024-07-15 18:52:02.674809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.674823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 00:26:46.139 [2024-07-15 18:52:02.675003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.675017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 00:26:46.139 [2024-07-15 18:52:02.675139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.675152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 00:26:46.139 [2024-07-15 18:52:02.675264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.675280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 00:26:46.139 [2024-07-15 18:52:02.675451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.675464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 
00:26:46.139 [2024-07-15 18:52:02.675643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.675657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 00:26:46.139 [2024-07-15 18:52:02.675757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.675771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 00:26:46.139 [2024-07-15 18:52:02.675952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.675965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 00:26:46.139 [2024-07-15 18:52:02.676148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.676163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 00:26:46.139 [2024-07-15 18:52:02.676286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.676300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 
00:26:46.139 [2024-07-15 18:52:02.676514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.676528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 00:26:46.139 [2024-07-15 18:52:02.676696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.676711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 00:26:46.139 [2024-07-15 18:52:02.676881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.676895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 00:26:46.139 [2024-07-15 18:52:02.677093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.677107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 00:26:46.139 [2024-07-15 18:52:02.677275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.677289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 
00:26:46.139 [2024-07-15 18:52:02.677463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.677477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 00:26:46.139 [2024-07-15 18:52:02.677673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.677687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 00:26:46.139 [2024-07-15 18:52:02.677802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.677815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 00:26:46.139 [2024-07-15 18:52:02.678000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.678014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 00:26:46.139 [2024-07-15 18:52:02.678196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.678210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 
00:26:46.139 [2024-07-15 18:52:02.678344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.678357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 00:26:46.139 [2024-07-15 18:52:02.678617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.678635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 00:26:46.139 [2024-07-15 18:52:02.678821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.678834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 00:26:46.139 [2024-07-15 18:52:02.679049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.679063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 00:26:46.139 [2024-07-15 18:52:02.679238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.679252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 
00:26:46.139 [2024-07-15 18:52:02.679372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.679385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 00:26:46.139 [2024-07-15 18:52:02.679563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.679576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 00:26:46.139 [2024-07-15 18:52:02.679766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.679780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 00:26:46.139 [2024-07-15 18:52:02.679854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.679867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 00:26:46.139 [2024-07-15 18:52:02.679994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.139 [2024-07-15 18:52:02.680008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.139 qpair failed and we were unable to recover it. 
00:26:46.141 [2024-07-15 18:52:02.689689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.141 [2024-07-15 18:52:02.689703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:46.141 qpair failed and we were unable to recover it.
00:26:46.141 [2024-07-15 18:52:02.689886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.141 [2024-07-15 18:52:02.689900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:46.141 qpair failed and we were unable to recover it.
00:26:46.141 [2024-07-15 18:52:02.690093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.141 [2024-07-15 18:52:02.690106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:46.141 qpair failed and we were unable to recover it.
00:26:46.141 [2024-07-15 18:52:02.690296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.141 [2024-07-15 18:52:02.690319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:46.141 qpair failed and we were unable to recover it.
00:26:46.141 [2024-07-15 18:52:02.690421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.141 [2024-07-15 18:52:02.690433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.141 qpair failed and we were unable to recover it.
00:26:46.143 [2024-07-15 18:52:02.699309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.699319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.143 qpair failed and we were unable to recover it. 00:26:46.143 [2024-07-15 18:52:02.699438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.699449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.143 qpair failed and we were unable to recover it. 00:26:46.143 [2024-07-15 18:52:02.699562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.699572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.143 qpair failed and we were unable to recover it. 00:26:46.143 [2024-07-15 18:52:02.699677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.699687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.143 qpair failed and we were unable to recover it. 00:26:46.143 [2024-07-15 18:52:02.699880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.699890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.143 qpair failed and we were unable to recover it. 
00:26:46.143 [2024-07-15 18:52:02.700031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.700041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.143 qpair failed and we were unable to recover it. 00:26:46.143 [2024-07-15 18:52:02.700281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.700292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.143 qpair failed and we were unable to recover it. 00:26:46.143 [2024-07-15 18:52:02.700502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.700512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.143 qpair failed and we were unable to recover it. 00:26:46.143 [2024-07-15 18:52:02.700787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.700796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.143 qpair failed and we were unable to recover it. 00:26:46.143 [2024-07-15 18:52:02.700981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.700991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.143 qpair failed and we were unable to recover it. 
00:26:46.143 [2024-07-15 18:52:02.701161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.701171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.143 qpair failed and we were unable to recover it. 00:26:46.143 [2024-07-15 18:52:02.701282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.701293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.143 qpair failed and we were unable to recover it. 00:26:46.143 [2024-07-15 18:52:02.701401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.701411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.143 qpair failed and we were unable to recover it. 00:26:46.143 [2024-07-15 18:52:02.701638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.701648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.143 qpair failed and we were unable to recover it. 00:26:46.143 [2024-07-15 18:52:02.701810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.701820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.143 qpair failed and we were unable to recover it. 
00:26:46.143 [2024-07-15 18:52:02.701935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.701945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.143 qpair failed and we were unable to recover it. 00:26:46.143 [2024-07-15 18:52:02.702046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.702056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.143 qpair failed and we were unable to recover it. 00:26:46.143 [2024-07-15 18:52:02.702254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.702265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.143 qpair failed and we were unable to recover it. 00:26:46.143 [2024-07-15 18:52:02.702428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.702438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.143 qpair failed and we were unable to recover it. 00:26:46.143 [2024-07-15 18:52:02.702598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.702607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.143 qpair failed and we were unable to recover it. 
00:26:46.143 [2024-07-15 18:52:02.702714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.702724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.143 qpair failed and we were unable to recover it. 00:26:46.143 [2024-07-15 18:52:02.702900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.702910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.143 qpair failed and we were unable to recover it. 00:26:46.143 [2024-07-15 18:52:02.703022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.703031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.143 qpair failed and we were unable to recover it. 00:26:46.143 [2024-07-15 18:52:02.703153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.703162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.143 qpair failed and we were unable to recover it. 00:26:46.143 [2024-07-15 18:52:02.703357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.703367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.143 qpair failed and we were unable to recover it. 
00:26:46.143 [2024-07-15 18:52:02.703448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.703457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.143 qpair failed and we were unable to recover it. 00:26:46.143 [2024-07-15 18:52:02.703732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.703741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.143 qpair failed and we were unable to recover it. 00:26:46.143 [2024-07-15 18:52:02.703904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.703914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.143 qpair failed and we were unable to recover it. 00:26:46.143 [2024-07-15 18:52:02.704036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.704046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.143 qpair failed and we were unable to recover it. 00:26:46.143 [2024-07-15 18:52:02.704156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.143 [2024-07-15 18:52:02.704166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 
00:26:46.144 [2024-07-15 18:52:02.704326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.704336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 00:26:46.144 [2024-07-15 18:52:02.704527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.704536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 00:26:46.144 [2024-07-15 18:52:02.704653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.704663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 00:26:46.144 [2024-07-15 18:52:02.704842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.704852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 00:26:46.144 [2024-07-15 18:52:02.705084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.705094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 
00:26:46.144 [2024-07-15 18:52:02.705254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.705264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 00:26:46.144 [2024-07-15 18:52:02.705381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.705391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 00:26:46.144 [2024-07-15 18:52:02.705564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.705574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 00:26:46.144 [2024-07-15 18:52:02.705736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.705746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 00:26:46.144 [2024-07-15 18:52:02.705909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.705919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 
00:26:46.144 [2024-07-15 18:52:02.706022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.706031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 00:26:46.144 [2024-07-15 18:52:02.706202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.706214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 00:26:46.144 [2024-07-15 18:52:02.706394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.706404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 00:26:46.144 [2024-07-15 18:52:02.706582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.706592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 00:26:46.144 [2024-07-15 18:52:02.706702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.706711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 
00:26:46.144 [2024-07-15 18:52:02.706894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.706903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 00:26:46.144 [2024-07-15 18:52:02.706994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.707004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 00:26:46.144 [2024-07-15 18:52:02.707185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.707195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 00:26:46.144 [2024-07-15 18:52:02.707364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.707374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 00:26:46.144 [2024-07-15 18:52:02.707493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.707503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 
00:26:46.144 [2024-07-15 18:52:02.707610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.707620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 00:26:46.144 [2024-07-15 18:52:02.707735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.707745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 00:26:46.144 [2024-07-15 18:52:02.707840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.707849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 00:26:46.144 [2024-07-15 18:52:02.708075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.708085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 00:26:46.144 [2024-07-15 18:52:02.708253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.708263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 
00:26:46.144 [2024-07-15 18:52:02.708381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.708391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 00:26:46.144 [2024-07-15 18:52:02.708620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.708630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 00:26:46.144 [2024-07-15 18:52:02.708812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.708822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 00:26:46.144 [2024-07-15 18:52:02.708981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.708991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 00:26:46.144 [2024-07-15 18:52:02.709106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.709115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 
00:26:46.144 [2024-07-15 18:52:02.709220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.709233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 00:26:46.144 [2024-07-15 18:52:02.709339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.709350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 00:26:46.144 [2024-07-15 18:52:02.709457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.709466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 00:26:46.144 [2024-07-15 18:52:02.709648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.709658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 00:26:46.144 [2024-07-15 18:52:02.709761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.709771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 
00:26:46.144 [2024-07-15 18:52:02.709939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.709949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.144 qpair failed and we were unable to recover it. 00:26:46.144 [2024-07-15 18:52:02.710072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.144 [2024-07-15 18:52:02.710081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.145 qpair failed and we were unable to recover it. 00:26:46.145 [2024-07-15 18:52:02.710257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.145 [2024-07-15 18:52:02.710267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.145 qpair failed and we were unable to recover it. 00:26:46.145 [2024-07-15 18:52:02.710372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.145 [2024-07-15 18:52:02.710389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.145 qpair failed and we were unable to recover it. 00:26:46.145 [2024-07-15 18:52:02.710624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.145 [2024-07-15 18:52:02.710638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.145 qpair failed and we were unable to recover it. 
00:26:46.145 [2024-07-15 18:52:02.710821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.145 [2024-07-15 18:52:02.710835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.145 qpair failed and we were unable to recover it. 00:26:46.145 [2024-07-15 18:52:02.710967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.145 [2024-07-15 18:52:02.710981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.145 qpair failed and we were unable to recover it. 00:26:46.145 [2024-07-15 18:52:02.711162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.145 [2024-07-15 18:52:02.711176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.145 qpair failed and we were unable to recover it. 00:26:46.145 [2024-07-15 18:52:02.711361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.145 [2024-07-15 18:52:02.711375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.145 qpair failed and we were unable to recover it. 00:26:46.145 [2024-07-15 18:52:02.711501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.145 [2024-07-15 18:52:02.711515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.145 qpair failed and we were unable to recover it. 
00:26:46.145 [2024-07-15 18:52:02.711641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.145 [2024-07-15 18:52:02.711655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.145 qpair failed and we were unable to recover it. 00:26:46.145 [2024-07-15 18:52:02.711775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.145 [2024-07-15 18:52:02.711789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.145 qpair failed and we were unable to recover it. 00:26:46.145 [2024-07-15 18:52:02.711976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.145 [2024-07-15 18:52:02.711990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.145 qpair failed and we were unable to recover it. 00:26:46.145 [2024-07-15 18:52:02.712117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.145 [2024-07-15 18:52:02.712131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.145 qpair failed and we were unable to recover it. 00:26:46.145 [2024-07-15 18:52:02.712318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.145 [2024-07-15 18:52:02.712332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.145 qpair failed and we were unable to recover it. 
00:26:46.146 [2024-07-15 18:52:02.717444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.146 [2024-07-15 18:52:02.717458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:46.146 qpair failed and we were unable to recover it.
00:26:46.146 [2024-07-15 18:52:02.717658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.146 [2024-07-15 18:52:02.717671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:46.146 qpair failed and we were unable to recover it.
00:26:46.146 [2024-07-15 18:52:02.717861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.146 [2024-07-15 18:52:02.717874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:46.146 qpair failed and we were unable to recover it.
00:26:46.146 [2024-07-15 18:52:02.718110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.146 [2024-07-15 18:52:02.718121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.146 qpair failed and we were unable to recover it.
00:26:46.146 [2024-07-15 18:52:02.718315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.146 [2024-07-15 18:52:02.718325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.146 qpair failed and we were unable to recover it.
00:26:46.148 [2024-07-15 18:52:02.731961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.148 [2024-07-15 18:52:02.731970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.148 qpair failed and we were unable to recover it. 00:26:46.148 [2024-07-15 18:52:02.732043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.148 [2024-07-15 18:52:02.732052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.148 qpair failed and we were unable to recover it. 00:26:46.148 [2024-07-15 18:52:02.732236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.148 [2024-07-15 18:52:02.732246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.148 qpair failed and we were unable to recover it. 00:26:46.148 [2024-07-15 18:52:02.732410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.148 [2024-07-15 18:52:02.732420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.148 qpair failed and we were unable to recover it. 00:26:46.148 [2024-07-15 18:52:02.732613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.148 [2024-07-15 18:52:02.732623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.148 qpair failed and we were unable to recover it. 
00:26:46.148 [2024-07-15 18:52:02.732719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.148 [2024-07-15 18:52:02.732729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.148 qpair failed and we were unable to recover it. 00:26:46.148 [2024-07-15 18:52:02.732917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.148 [2024-07-15 18:52:02.732926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.148 qpair failed and we were unable to recover it. 00:26:46.148 [2024-07-15 18:52:02.733103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.148 [2024-07-15 18:52:02.733113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.148 qpair failed and we were unable to recover it. 00:26:46.148 [2024-07-15 18:52:02.733338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.148 [2024-07-15 18:52:02.733348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.148 qpair failed and we were unable to recover it. 00:26:46.148 [2024-07-15 18:52:02.733454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.148 [2024-07-15 18:52:02.733464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.148 qpair failed and we were unable to recover it. 
00:26:46.148 [2024-07-15 18:52:02.733569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.148 [2024-07-15 18:52:02.733578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.148 qpair failed and we were unable to recover it. 00:26:46.148 [2024-07-15 18:52:02.733684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.148 [2024-07-15 18:52:02.733694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.148 qpair failed and we were unable to recover it. 00:26:46.148 [2024-07-15 18:52:02.733870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.148 [2024-07-15 18:52:02.733880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.148 qpair failed and we were unable to recover it. 00:26:46.148 [2024-07-15 18:52:02.734107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.148 [2024-07-15 18:52:02.734116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.148 qpair failed and we were unable to recover it. 00:26:46.148 [2024-07-15 18:52:02.734223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.148 [2024-07-15 18:52:02.734238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.148 qpair failed and we were unable to recover it. 
00:26:46.148 [2024-07-15 18:52:02.734511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.148 [2024-07-15 18:52:02.734521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.148 qpair failed and we were unable to recover it. 00:26:46.148 [2024-07-15 18:52:02.734695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.148 [2024-07-15 18:52:02.734705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.148 qpair failed and we were unable to recover it. 00:26:46.148 [2024-07-15 18:52:02.734815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.734825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.734944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.734955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.735144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.735154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 
00:26:46.149 [2024-07-15 18:52:02.735263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.735273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.735465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.735475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.735599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.735609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.735725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.735735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.735961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.735971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 
00:26:46.149 [2024-07-15 18:52:02.736216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.736229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.736403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.736413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.736576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.736585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.736707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.736716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.736821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.736830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 
00:26:46.149 [2024-07-15 18:52:02.736999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.737008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.737169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.737179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.737285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.737295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.737400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.737410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.737521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.737530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 
00:26:46.149 [2024-07-15 18:52:02.737722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.737732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.737856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.737865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.737968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.737978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.738089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.738101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.738329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.738339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 
00:26:46.149 [2024-07-15 18:52:02.738495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.738505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.738613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.738623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.738724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.738734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.738965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.738975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.739094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.739104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 
00:26:46.149 [2024-07-15 18:52:02.739272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.739282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.739385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.739395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.739642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.739652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.739757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.739767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.739862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.739872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 
00:26:46.149 [2024-07-15 18:52:02.740124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.740133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.740311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.740321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.740501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.740511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.740738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.740748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.149 [2024-07-15 18:52:02.740946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.740956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 
00:26:46.149 [2024-07-15 18:52:02.741191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.149 [2024-07-15 18:52:02.741201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.149 qpair failed and we were unable to recover it. 00:26:46.150 [2024-07-15 18:52:02.741269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.150 [2024-07-15 18:52:02.741279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.150 qpair failed and we were unable to recover it. 00:26:46.150 [2024-07-15 18:52:02.741457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.150 [2024-07-15 18:52:02.741467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.150 qpair failed and we were unable to recover it. 00:26:46.150 [2024-07-15 18:52:02.741662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.150 [2024-07-15 18:52:02.741671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.150 qpair failed and we were unable to recover it. 00:26:46.150 [2024-07-15 18:52:02.741777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.150 [2024-07-15 18:52:02.741786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.150 qpair failed and we were unable to recover it. 
00:26:46.150 [2024-07-15 18:52:02.741895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.150 [2024-07-15 18:52:02.741905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.150 qpair failed and we were unable to recover it. 00:26:46.150 [2024-07-15 18:52:02.742072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.150 [2024-07-15 18:52:02.742082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.150 qpair failed and we were unable to recover it. 00:26:46.150 [2024-07-15 18:52:02.742255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.150 [2024-07-15 18:52:02.742265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.150 qpair failed and we were unable to recover it. 00:26:46.150 [2024-07-15 18:52:02.742384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.150 [2024-07-15 18:52:02.742394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.150 qpair failed and we were unable to recover it. 00:26:46.150 [2024-07-15 18:52:02.742562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.150 [2024-07-15 18:52:02.742572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.150 qpair failed and we were unable to recover it. 
00:26:46.150 [2024-07-15 18:52:02.742677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.150 [2024-07-15 18:52:02.742687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.150 qpair failed and we were unable to recover it. 00:26:46.150 [2024-07-15 18:52:02.742864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.150 [2024-07-15 18:52:02.742874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.150 qpair failed and we were unable to recover it. 00:26:46.150 [2024-07-15 18:52:02.743031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.150 [2024-07-15 18:52:02.743041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.150 qpair failed and we were unable to recover it. 00:26:46.150 [2024-07-15 18:52:02.743238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.150 [2024-07-15 18:52:02.743248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.150 qpair failed and we were unable to recover it. 00:26:46.150 [2024-07-15 18:52:02.743493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.150 [2024-07-15 18:52:02.743503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.150 qpair failed and we were unable to recover it. 
00:26:46.150 [2024-07-15 18:52:02.743741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.150 [2024-07-15 18:52:02.743751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.150 qpair failed and we were unable to recover it. 00:26:46.150 [2024-07-15 18:52:02.743846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.150 [2024-07-15 18:52:02.743856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.150 qpair failed and we were unable to recover it. 00:26:46.150 [2024-07-15 18:52:02.743976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.150 [2024-07-15 18:52:02.743985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.150 qpair failed and we were unable to recover it. 00:26:46.150 [2024-07-15 18:52:02.744143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.150 [2024-07-15 18:52:02.744153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.150 qpair failed and we were unable to recover it. 00:26:46.150 [2024-07-15 18:52:02.744333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.150 [2024-07-15 18:52:02.744343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.150 qpair failed and we were unable to recover it. 
00:26:46.150 [2024-07-15 18:52:02.744513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.150 [2024-07-15 18:52:02.744523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.150 qpair failed and we were unable to recover it.
00:26:46.153 [2024-07-15 18:52:02.764235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.153 [2024-07-15 18:52:02.764245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.153 qpair failed and we were unable to recover it. 00:26:46.153 [2024-07-15 18:52:02.764421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.153 [2024-07-15 18:52:02.764431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.153 qpair failed and we were unable to recover it. 00:26:46.153 [2024-07-15 18:52:02.764591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.153 [2024-07-15 18:52:02.764600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.153 qpair failed and we were unable to recover it. 00:26:46.153 [2024-07-15 18:52:02.764707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.153 [2024-07-15 18:52:02.764717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.153 qpair failed and we were unable to recover it. 00:26:46.153 [2024-07-15 18:52:02.764834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.764844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 
00:26:46.154 [2024-07-15 18:52:02.765096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.765105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 00:26:46.154 [2024-07-15 18:52:02.765276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.765286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 00:26:46.154 [2024-07-15 18:52:02.765515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.765525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 00:26:46.154 [2024-07-15 18:52:02.765624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.765635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 00:26:46.154 [2024-07-15 18:52:02.765888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.765898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 
00:26:46.154 [2024-07-15 18:52:02.766131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.766140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 00:26:46.154 [2024-07-15 18:52:02.766299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.766309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 00:26:46.154 [2024-07-15 18:52:02.766411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.766421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 00:26:46.154 [2024-07-15 18:52:02.766525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.766535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 00:26:46.154 [2024-07-15 18:52:02.766699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.766708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 
00:26:46.154 [2024-07-15 18:52:02.766948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.766958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 00:26:46.154 [2024-07-15 18:52:02.767065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.767074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 00:26:46.154 [2024-07-15 18:52:02.767249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.767259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 00:26:46.154 [2024-07-15 18:52:02.767387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.767397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 00:26:46.154 [2024-07-15 18:52:02.767555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.767565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 
00:26:46.154 [2024-07-15 18:52:02.767745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.767755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 00:26:46.154 [2024-07-15 18:52:02.767948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.767958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 00:26:46.154 [2024-07-15 18:52:02.768069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.768079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 00:26:46.154 [2024-07-15 18:52:02.768190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.768199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 00:26:46.154 [2024-07-15 18:52:02.768475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.768486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 
00:26:46.154 [2024-07-15 18:52:02.768664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.768674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 00:26:46.154 [2024-07-15 18:52:02.768787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.768797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 00:26:46.154 [2024-07-15 18:52:02.768927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.768937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 00:26:46.154 [2024-07-15 18:52:02.769041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.769051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 00:26:46.154 [2024-07-15 18:52:02.769214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.769228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 
00:26:46.154 [2024-07-15 18:52:02.769389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.769398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 00:26:46.154 [2024-07-15 18:52:02.769521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.769531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 00:26:46.154 [2024-07-15 18:52:02.769645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.769655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 00:26:46.154 [2024-07-15 18:52:02.769768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.769779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 00:26:46.154 [2024-07-15 18:52:02.769884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.769893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 
00:26:46.154 [2024-07-15 18:52:02.770076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.770086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 00:26:46.154 [2024-07-15 18:52:02.770186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.154 [2024-07-15 18:52:02.770196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.154 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.770315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.770325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.770439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.770449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.770632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.770642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 
00:26:46.155 [2024-07-15 18:52:02.770805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.770815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.771009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.771019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.771155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.771164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.771355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.771365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.771477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.771488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 
00:26:46.155 [2024-07-15 18:52:02.771671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.771681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.771856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.771866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.772095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.772104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.772198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.772209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.772401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.772412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 
00:26:46.155 [2024-07-15 18:52:02.772682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.772692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.772860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.772870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.773048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.773058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.773250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.773260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.773369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.773379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 
00:26:46.155 [2024-07-15 18:52:02.773634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.773644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.773761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.773771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.773930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.773940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.774165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.774174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.774281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.774292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 
00:26:46.155 [2024-07-15 18:52:02.774425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.774435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.774547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.774557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.774736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.774746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.774905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.774915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.775030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.775040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 
00:26:46.155 [2024-07-15 18:52:02.775159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.775169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.775334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.775344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.775518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.775528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.775710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.775720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.775864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.775873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 
00:26:46.155 [2024-07-15 18:52:02.776059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.776069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.776300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.776311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.776495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.776505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.776687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.776697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.776863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.776872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 
00:26:46.155 [2024-07-15 18:52:02.777049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.777059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.777166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.155 [2024-07-15 18:52:02.777175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.155 qpair failed and we were unable to recover it. 00:26:46.155 [2024-07-15 18:52:02.777252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.156 [2024-07-15 18:52:02.777262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.156 qpair failed and we were unable to recover it. 00:26:46.156 [2024-07-15 18:52:02.777420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.156 [2024-07-15 18:52:02.777430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.156 qpair failed and we were unable to recover it. 00:26:46.156 [2024-07-15 18:52:02.777622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.156 [2024-07-15 18:52:02.777633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.156 qpair failed and we were unable to recover it. 
00:26:46.464 [2024-07-15 18:52:02.796545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.464 [2024-07-15 18:52:02.796556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.464 qpair failed and we were unable to recover it. 00:26:46.464 [2024-07-15 18:52:02.796721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.464 [2024-07-15 18:52:02.796731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.464 qpair failed and we were unable to recover it. 00:26:46.464 [2024-07-15 18:52:02.796836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.464 [2024-07-15 18:52:02.796846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.464 qpair failed and we were unable to recover it. 00:26:46.464 [2024-07-15 18:52:02.797006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.464 [2024-07-15 18:52:02.797016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.464 qpair failed and we were unable to recover it. 00:26:46.464 [2024-07-15 18:52:02.797108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.464 [2024-07-15 18:52:02.797118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.464 qpair failed and we were unable to recover it. 
00:26:46.464 [2024-07-15 18:52:02.797304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.464 [2024-07-15 18:52:02.797315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.464 qpair failed and we were unable to recover it. 00:26:46.464 [2024-07-15 18:52:02.797399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.464 [2024-07-15 18:52:02.797408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.464 qpair failed and we were unable to recover it. 00:26:46.464 [2024-07-15 18:52:02.797634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.464 [2024-07-15 18:52:02.797643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.464 qpair failed and we were unable to recover it. 00:26:46.464 [2024-07-15 18:52:02.797773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.464 [2024-07-15 18:52:02.797783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.464 qpair failed and we were unable to recover it. 00:26:46.464 [2024-07-15 18:52:02.797946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.464 [2024-07-15 18:52:02.797956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.464 qpair failed and we were unable to recover it. 
00:26:46.464 [2024-07-15 18:52:02.798131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.464 [2024-07-15 18:52:02.798141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.464 qpair failed and we were unable to recover it. 00:26:46.464 [2024-07-15 18:52:02.798389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.464 [2024-07-15 18:52:02.798399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.464 qpair failed and we were unable to recover it. 00:26:46.464 [2024-07-15 18:52:02.798525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.464 [2024-07-15 18:52:02.798535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.464 qpair failed and we were unable to recover it. 00:26:46.464 [2024-07-15 18:52:02.798703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.464 [2024-07-15 18:52:02.798713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.464 qpair failed and we were unable to recover it. 00:26:46.464 [2024-07-15 18:52:02.798822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.464 [2024-07-15 18:52:02.798832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.464 qpair failed and we were unable to recover it. 
00:26:46.464 [2024-07-15 18:52:02.798949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.464 [2024-07-15 18:52:02.798958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.464 qpair failed and we were unable to recover it. 00:26:46.464 [2024-07-15 18:52:02.799120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.464 [2024-07-15 18:52:02.799130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.464 qpair failed and we were unable to recover it. 00:26:46.464 [2024-07-15 18:52:02.799281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.464 [2024-07-15 18:52:02.799291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.464 qpair failed and we were unable to recover it. 00:26:46.464 [2024-07-15 18:52:02.799401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.464 [2024-07-15 18:52:02.799413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.464 qpair failed and we were unable to recover it. 00:26:46.464 [2024-07-15 18:52:02.799612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.464 [2024-07-15 18:52:02.799622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.464 qpair failed and we were unable to recover it. 
00:26:46.464 [2024-07-15 18:52:02.799731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.464 [2024-07-15 18:52:02.799740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.464 qpair failed and we were unable to recover it. 00:26:46.464 [2024-07-15 18:52:02.799899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.464 [2024-07-15 18:52:02.799909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.464 qpair failed and we were unable to recover it. 00:26:46.464 [2024-07-15 18:52:02.800080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.464 [2024-07-15 18:52:02.800090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.464 qpair failed and we were unable to recover it. 00:26:46.464 [2024-07-15 18:52:02.800268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.464 [2024-07-15 18:52:02.800279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.464 qpair failed and we were unable to recover it. 00:26:46.464 [2024-07-15 18:52:02.800449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.464 [2024-07-15 18:52:02.800459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.464 qpair failed and we were unable to recover it. 
00:26:46.464 [2024-07-15 18:52:02.800568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.464 [2024-07-15 18:52:02.800578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.464 qpair failed and we were unable to recover it. 00:26:46.465 [2024-07-15 18:52:02.800743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.800752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 00:26:46.465 [2024-07-15 18:52:02.800930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.800940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 00:26:46.465 [2024-07-15 18:52:02.801048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.801057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 00:26:46.465 [2024-07-15 18:52:02.801317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.801327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 
00:26:46.465 [2024-07-15 18:52:02.801434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.801444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 00:26:46.465 [2024-07-15 18:52:02.801603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.801613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 00:26:46.465 [2024-07-15 18:52:02.801809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.801819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 00:26:46.465 [2024-07-15 18:52:02.801931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.801941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 00:26:46.465 [2024-07-15 18:52:02.802105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.802115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 
00:26:46.465 [2024-07-15 18:52:02.802231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.802241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 00:26:46.465 [2024-07-15 18:52:02.802347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.802357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 00:26:46.465 [2024-07-15 18:52:02.802543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.802553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 00:26:46.465 [2024-07-15 18:52:02.802730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.802740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 00:26:46.465 [2024-07-15 18:52:02.802860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.802870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 
00:26:46.465 [2024-07-15 18:52:02.803068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.803078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 00:26:46.465 [2024-07-15 18:52:02.803308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.803318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 00:26:46.465 [2024-07-15 18:52:02.803493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.803503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 00:26:46.465 [2024-07-15 18:52:02.803630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.803640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 00:26:46.465 [2024-07-15 18:52:02.803902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.803912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 
00:26:46.465 [2024-07-15 18:52:02.804148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.804158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 00:26:46.465 [2024-07-15 18:52:02.804337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.804347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 00:26:46.465 [2024-07-15 18:52:02.804516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.804526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 00:26:46.465 [2024-07-15 18:52:02.804607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.804617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 00:26:46.465 [2024-07-15 18:52:02.804679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.804689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 
00:26:46.465 [2024-07-15 18:52:02.804854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.804863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 00:26:46.465 [2024-07-15 18:52:02.804976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.804986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 00:26:46.465 [2024-07-15 18:52:02.805101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.805111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 00:26:46.465 [2024-07-15 18:52:02.805228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.805239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 00:26:46.465 [2024-07-15 18:52:02.805423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.805433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 
00:26:46.465 [2024-07-15 18:52:02.805615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.805625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 00:26:46.465 [2024-07-15 18:52:02.805724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.805734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 00:26:46.465 [2024-07-15 18:52:02.805960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.805970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 00:26:46.465 [2024-07-15 18:52:02.806094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.806104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 00:26:46.465 [2024-07-15 18:52:02.806198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.465 [2024-07-15 18:52:02.806209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.465 qpair failed and we were unable to recover it. 
00:26:46.465 [2024-07-15 18:52:02.806384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.466 [2024-07-15 18:52:02.806394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.466 qpair failed and we were unable to recover it. 00:26:46.466 [2024-07-15 18:52:02.806489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.466 [2024-07-15 18:52:02.806499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.466 qpair failed and we were unable to recover it. 00:26:46.466 [2024-07-15 18:52:02.806676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.466 [2024-07-15 18:52:02.806686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.466 qpair failed and we were unable to recover it. 00:26:46.466 [2024-07-15 18:52:02.806781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.466 [2024-07-15 18:52:02.806790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.466 qpair failed and we were unable to recover it. 00:26:46.466 [2024-07-15 18:52:02.806926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.466 [2024-07-15 18:52:02.806936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.466 qpair failed and we were unable to recover it. 
00:26:46.466 [2024-07-15 18:52:02.807105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.466 [2024-07-15 18:52:02.807115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.466 qpair failed and we were unable to recover it. 00:26:46.466 [2024-07-15 18:52:02.807292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.466 [2024-07-15 18:52:02.807302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.466 qpair failed and we were unable to recover it. 00:26:46.466 [2024-07-15 18:52:02.807545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.466 [2024-07-15 18:52:02.807555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.466 qpair failed and we were unable to recover it. 00:26:46.466 [2024-07-15 18:52:02.807658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.466 [2024-07-15 18:52:02.807669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.466 qpair failed and we were unable to recover it. 00:26:46.466 [2024-07-15 18:52:02.807921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.466 [2024-07-15 18:52:02.807931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.466 qpair failed and we were unable to recover it. 
00:26:46.466 [2024-07-15 18:52:02.808104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.466 [2024-07-15 18:52:02.808113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.466 qpair failed and we were unable to recover it. 00:26:46.466 [2024-07-15 18:52:02.808235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.466 [2024-07-15 18:52:02.808245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.466 qpair failed and we were unable to recover it. 00:26:46.466 [2024-07-15 18:52:02.808358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.466 [2024-07-15 18:52:02.808368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.466 qpair failed and we were unable to recover it. 00:26:46.466 [2024-07-15 18:52:02.808548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.466 [2024-07-15 18:52:02.808558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.466 qpair failed and we were unable to recover it. 00:26:46.466 [2024-07-15 18:52:02.808720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.466 [2024-07-15 18:52:02.808730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.466 qpair failed and we were unable to recover it. 
00:26:46.466 [2024-07-15 18:52:02.808898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.466 [2024-07-15 18:52:02.808908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.466 qpair failed and we were unable to recover it.
00:26:46.469 [2024-07-15 18:52:02.827859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.469 [2024-07-15 18:52:02.827869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.469 qpair failed and we were unable to recover it. 00:26:46.469 [2024-07-15 18:52:02.827967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.469 [2024-07-15 18:52:02.827977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.469 qpair failed and we were unable to recover it. 00:26:46.469 [2024-07-15 18:52:02.828091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.469 [2024-07-15 18:52:02.828100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.469 qpair failed and we were unable to recover it. 00:26:46.470 [2024-07-15 18:52:02.828204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.828214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 00:26:46.470 [2024-07-15 18:52:02.828388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.828399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 
00:26:46.470 [2024-07-15 18:52:02.828511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.828521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 00:26:46.470 [2024-07-15 18:52:02.828629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.828639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 00:26:46.470 [2024-07-15 18:52:02.828885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.828895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 00:26:46.470 [2024-07-15 18:52:02.829102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.829112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 00:26:46.470 [2024-07-15 18:52:02.829369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.829379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 
00:26:46.470 [2024-07-15 18:52:02.829612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.829622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 00:26:46.470 [2024-07-15 18:52:02.829850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.829860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 00:26:46.470 [2024-07-15 18:52:02.830019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.830029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 00:26:46.470 [2024-07-15 18:52:02.830153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.830163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 00:26:46.470 [2024-07-15 18:52:02.830337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.830348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 
00:26:46.470 [2024-07-15 18:52:02.830597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.830609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 00:26:46.470 [2024-07-15 18:52:02.830769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.830779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 00:26:46.470 [2024-07-15 18:52:02.831006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.831016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 00:26:46.470 [2024-07-15 18:52:02.831080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.831090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 00:26:46.470 [2024-07-15 18:52:02.831210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.831220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 
00:26:46.470 [2024-07-15 18:52:02.831394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.831404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 00:26:46.470 [2024-07-15 18:52:02.831578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.831588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 00:26:46.470 [2024-07-15 18:52:02.831713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.831723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 00:26:46.470 [2024-07-15 18:52:02.831950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.831960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 00:26:46.470 [2024-07-15 18:52:02.832192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.832202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 
00:26:46.470 [2024-07-15 18:52:02.832388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.832398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 00:26:46.470 [2024-07-15 18:52:02.832579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.832589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 00:26:46.470 [2024-07-15 18:52:02.832685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.832696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 00:26:46.470 [2024-07-15 18:52:02.832858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.832869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 00:26:46.470 [2024-07-15 18:52:02.833039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.833050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 
00:26:46.470 [2024-07-15 18:52:02.833302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.833313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 00:26:46.470 [2024-07-15 18:52:02.833560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.833570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 00:26:46.470 [2024-07-15 18:52:02.833674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.833684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 00:26:46.470 [2024-07-15 18:52:02.833792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.833802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 00:26:46.470 [2024-07-15 18:52:02.833970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.833980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 
00:26:46.470 [2024-07-15 18:52:02.834146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.834156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 00:26:46.470 [2024-07-15 18:52:02.834359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.834369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 00:26:46.470 [2024-07-15 18:52:02.834531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.834540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.470 qpair failed and we were unable to recover it. 00:26:46.470 [2024-07-15 18:52:02.834652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-07-15 18:52:02.834662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.471 qpair failed and we were unable to recover it. 00:26:46.471 [2024-07-15 18:52:02.834826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.471 [2024-07-15 18:52:02.834836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.471 qpair failed and we were unable to recover it. 
00:26:46.471 [2024-07-15 18:52:02.835022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.471 [2024-07-15 18:52:02.835031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.471 qpair failed and we were unable to recover it. 00:26:46.471 [2024-07-15 18:52:02.835292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.471 [2024-07-15 18:52:02.835302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.471 qpair failed and we were unable to recover it. 00:26:46.471 [2024-07-15 18:52:02.835555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.471 [2024-07-15 18:52:02.835565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.471 qpair failed and we were unable to recover it. 00:26:46.471 [2024-07-15 18:52:02.835805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.471 [2024-07-15 18:52:02.835815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.471 qpair failed and we were unable to recover it. 00:26:46.471 [2024-07-15 18:52:02.835985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.471 [2024-07-15 18:52:02.835995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.471 qpair failed and we were unable to recover it. 
00:26:46.471 [2024-07-15 18:52:02.836172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.471 [2024-07-15 18:52:02.836182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.471 qpair failed and we were unable to recover it. 00:26:46.471 [2024-07-15 18:52:02.836353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.471 [2024-07-15 18:52:02.836363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.471 qpair failed and we were unable to recover it. 00:26:46.471 [2024-07-15 18:52:02.836617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.471 [2024-07-15 18:52:02.836627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.471 qpair failed and we were unable to recover it. 00:26:46.471 [2024-07-15 18:52:02.836746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.471 [2024-07-15 18:52:02.836756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.471 qpair failed and we were unable to recover it. 00:26:46.471 [2024-07-15 18:52:02.836933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.471 [2024-07-15 18:52:02.836943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.471 qpair failed and we were unable to recover it. 
00:26:46.471 [2024-07-15 18:52:02.837123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.471 [2024-07-15 18:52:02.837133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.471 qpair failed and we were unable to recover it. 00:26:46.471 [2024-07-15 18:52:02.837375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.471 [2024-07-15 18:52:02.837386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.471 qpair failed and we were unable to recover it. 00:26:46.471 [2024-07-15 18:52:02.837545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.471 [2024-07-15 18:52:02.837554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.471 qpair failed and we were unable to recover it. 00:26:46.471 [2024-07-15 18:52:02.837680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.471 [2024-07-15 18:52:02.837689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.471 qpair failed and we were unable to recover it. 00:26:46.471 [2024-07-15 18:52:02.837867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.471 [2024-07-15 18:52:02.837877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.471 qpair failed and we were unable to recover it. 
00:26:46.471 [2024-07-15 18:52:02.838189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.471 [2024-07-15 18:52:02.838201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.471 qpair failed and we were unable to recover it. 00:26:46.471 [2024-07-15 18:52:02.838373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.471 [2024-07-15 18:52:02.838384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.471 qpair failed and we were unable to recover it. 00:26:46.471 [2024-07-15 18:52:02.838633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.471 [2024-07-15 18:52:02.838643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.471 qpair failed and we were unable to recover it. 00:26:46.471 [2024-07-15 18:52:02.838869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.471 [2024-07-15 18:52:02.838879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.471 qpair failed and we were unable to recover it. 00:26:46.471 [2024-07-15 18:52:02.839079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.471 [2024-07-15 18:52:02.839090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.471 qpair failed and we were unable to recover it. 
00:26:46.471 [2024-07-15 18:52:02.839269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.471 [2024-07-15 18:52:02.839279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.471 qpair failed and we were unable to recover it. 00:26:46.471 [2024-07-15 18:52:02.839527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.471 [2024-07-15 18:52:02.839537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.471 qpair failed and we were unable to recover it. 00:26:46.471 [2024-07-15 18:52:02.839785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.471 [2024-07-15 18:52:02.839795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.471 qpair failed and we were unable to recover it. 00:26:46.471 [2024-07-15 18:52:02.840069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.471 [2024-07-15 18:52:02.840079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.471 qpair failed and we were unable to recover it. 00:26:46.472 [2024-07-15 18:52:02.840187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.472 [2024-07-15 18:52:02.840197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.472 qpair failed and we were unable to recover it. 
00:26:46.472 [2024-07-15 18:52:02.840324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.472 [2024-07-15 18:52:02.840334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.472 qpair failed and we were unable to recover it. 00:26:46.472 [2024-07-15 18:52:02.840508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.472 [2024-07-15 18:52:02.840517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.472 qpair failed and we were unable to recover it. 00:26:46.472 [2024-07-15 18:52:02.840677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.472 [2024-07-15 18:52:02.840687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.472 qpair failed and we were unable to recover it. 00:26:46.472 [2024-07-15 18:52:02.840941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.472 [2024-07-15 18:52:02.840951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.472 qpair failed and we were unable to recover it. 00:26:46.472 [2024-07-15 18:52:02.841188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.472 [2024-07-15 18:52:02.841198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.472 qpair failed and we were unable to recover it. 
00:26:46.472 [2024-07-15 18:52:02.841374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.472 [2024-07-15 18:52:02.841385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.472 qpair failed and we were unable to recover it. 00:26:46.472 [2024-07-15 18:52:02.841560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.472 [2024-07-15 18:52:02.841570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.472 qpair failed and we were unable to recover it. 00:26:46.472 [2024-07-15 18:52:02.841847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.472 [2024-07-15 18:52:02.841857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.472 qpair failed and we were unable to recover it. 00:26:46.472 [2024-07-15 18:52:02.842104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.472 [2024-07-15 18:52:02.842113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.472 qpair failed and we were unable to recover it. 00:26:46.472 [2024-07-15 18:52:02.842329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.472 [2024-07-15 18:52:02.842340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.472 qpair failed and we were unable to recover it. 
00:26:46.472 [2024-07-15 18:52:02.842558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.472 [2024-07-15 18:52:02.842568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.472 qpair failed and we were unable to recover it. 00:26:46.472 [2024-07-15 18:52:02.842818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.472 [2024-07-15 18:52:02.842828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.472 qpair failed and we were unable to recover it. 00:26:46.472 [2024-07-15 18:52:02.843005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.472 [2024-07-15 18:52:02.843015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.472 qpair failed and we were unable to recover it. 00:26:46.472 [2024-07-15 18:52:02.843220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.472 [2024-07-15 18:52:02.843234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.472 qpair failed and we were unable to recover it. 00:26:46.472 [2024-07-15 18:52:02.843481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.472 [2024-07-15 18:52:02.843491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.472 qpair failed and we were unable to recover it. 
00:26:46.474 [2024-07-15 18:52:02.860730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.474 [2024-07-15 18:52:02.860740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.474 qpair failed and we were unable to recover it. 00:26:46.474 [2024-07-15 18:52:02.861012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.474 [2024-07-15 18:52:02.861022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.474 qpair failed and we were unable to recover it. 00:26:46.474 [2024-07-15 18:52:02.861304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.474 [2024-07-15 18:52:02.861335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.474 qpair failed and we were unable to recover it. 00:26:46.474 [2024-07-15 18:52:02.861529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.474 [2024-07-15 18:52:02.861543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.474 qpair failed and we were unable to recover it. 00:26:46.474 [2024-07-15 18:52:02.861777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.474 [2024-07-15 18:52:02.861792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.474 qpair failed and we were unable to recover it. 
00:26:46.475 [2024-07-15 18:52:02.867557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.475 [2024-07-15 18:52:02.867572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.475 qpair failed and we were unable to recover it. 00:26:46.475 [2024-07-15 18:52:02.867754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.475 [2024-07-15 18:52:02.867767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.475 qpair failed and we were unable to recover it. 00:26:46.475 [2024-07-15 18:52:02.867947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.475 [2024-07-15 18:52:02.867960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.475 qpair failed and we were unable to recover it. 00:26:46.475 [2024-07-15 18:52:02.868197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.475 [2024-07-15 18:52:02.868211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.475 qpair failed and we were unable to recover it. 00:26:46.475 [2024-07-15 18:52:02.868394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.475 [2024-07-15 18:52:02.868408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.475 qpair failed and we were unable to recover it. 
00:26:46.475 [2024-07-15 18:52:02.868641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.475 [2024-07-15 18:52:02.868652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.475 qpair failed and we were unable to recover it. 00:26:46.475 [2024-07-15 18:52:02.868879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.475 [2024-07-15 18:52:02.868890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.475 qpair failed and we were unable to recover it. 00:26:46.475 [2024-07-15 18:52:02.869009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.475 [2024-07-15 18:52:02.869019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.475 qpair failed and we were unable to recover it. 00:26:46.475 [2024-07-15 18:52:02.869190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.475 [2024-07-15 18:52:02.869200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.475 qpair failed and we were unable to recover it. 00:26:46.475 [2024-07-15 18:52:02.869364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.475 [2024-07-15 18:52:02.869375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.475 qpair failed and we were unable to recover it. 
00:26:46.475 [2024-07-15 18:52:02.869563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.475 [2024-07-15 18:52:02.869573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.475 qpair failed and we were unable to recover it. 00:26:46.475 [2024-07-15 18:52:02.869827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.475 [2024-07-15 18:52:02.869837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.475 qpair failed and we were unable to recover it. 00:26:46.475 [2024-07-15 18:52:02.870018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.475 [2024-07-15 18:52:02.870034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.475 qpair failed and we were unable to recover it. 00:26:46.475 [2024-07-15 18:52:02.870293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.475 [2024-07-15 18:52:02.870308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.475 qpair failed and we were unable to recover it. 00:26:46.475 [2024-07-15 18:52:02.870491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.475 [2024-07-15 18:52:02.870505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.475 qpair failed and we were unable to recover it. 
00:26:46.475 [2024-07-15 18:52:02.870766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.475 [2024-07-15 18:52:02.870780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.475 qpair failed and we were unable to recover it. 00:26:46.475 [2024-07-15 18:52:02.870986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.870999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 00:26:46.476 [2024-07-15 18:52:02.871245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.871258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 00:26:46.476 [2024-07-15 18:52:02.871512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.871526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 00:26:46.476 [2024-07-15 18:52:02.871697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.871711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 
00:26:46.476 [2024-07-15 18:52:02.871969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.871983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 00:26:46.476 [2024-07-15 18:52:02.872164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.872178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 00:26:46.476 [2024-07-15 18:52:02.872292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.872314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 00:26:46.476 [2024-07-15 18:52:02.872446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.872460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 00:26:46.476 [2024-07-15 18:52:02.872663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.872675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 
00:26:46.476 [2024-07-15 18:52:02.872846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.872858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 00:26:46.476 [2024-07-15 18:52:02.872982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.872992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 00:26:46.476 [2024-07-15 18:52:02.873176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.873186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 00:26:46.476 [2024-07-15 18:52:02.873365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.873375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 00:26:46.476 [2024-07-15 18:52:02.873631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.873641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 
00:26:46.476 [2024-07-15 18:52:02.873766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.873776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 00:26:46.476 [2024-07-15 18:52:02.874016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.874026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 00:26:46.476 [2024-07-15 18:52:02.874201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.874211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 00:26:46.476 [2024-07-15 18:52:02.874326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.874337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 00:26:46.476 [2024-07-15 18:52:02.874447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.874457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 
00:26:46.476 [2024-07-15 18:52:02.874710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.874720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 00:26:46.476 [2024-07-15 18:52:02.874833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.874844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 00:26:46.476 [2024-07-15 18:52:02.875021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.875032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 00:26:46.476 [2024-07-15 18:52:02.875125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.875135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 00:26:46.476 [2024-07-15 18:52:02.875254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.875265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 
00:26:46.476 [2024-07-15 18:52:02.875368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.875378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 00:26:46.476 [2024-07-15 18:52:02.875547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.875557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 00:26:46.476 [2024-07-15 18:52:02.875652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.875662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 00:26:46.476 [2024-07-15 18:52:02.875841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.875851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 00:26:46.476 [2024-07-15 18:52:02.876020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.876030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 
00:26:46.476 [2024-07-15 18:52:02.876200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.876210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 00:26:46.476 [2024-07-15 18:52:02.876390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.876401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 00:26:46.476 [2024-07-15 18:52:02.876519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.476 [2024-07-15 18:52:02.876528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.476 qpair failed and we were unable to recover it. 00:26:46.477 [2024-07-15 18:52:02.876651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.477 [2024-07-15 18:52:02.876662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.477 qpair failed and we were unable to recover it. 00:26:46.477 [2024-07-15 18:52:02.876839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.477 [2024-07-15 18:52:02.876850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.477 qpair failed and we were unable to recover it. 
00:26:46.477 [2024-07-15 18:52:02.877015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.477 [2024-07-15 18:52:02.877025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.477 qpair failed and we were unable to recover it. 00:26:46.477 [2024-07-15 18:52:02.877202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.477 [2024-07-15 18:52:02.877213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.477 qpair failed and we were unable to recover it. 00:26:46.477 [2024-07-15 18:52:02.877389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.477 [2024-07-15 18:52:02.877400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.477 qpair failed and we were unable to recover it. 00:26:46.477 [2024-07-15 18:52:02.877573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.477 [2024-07-15 18:52:02.877584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.477 qpair failed and we were unable to recover it. 00:26:46.477 [2024-07-15 18:52:02.877814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.477 [2024-07-15 18:52:02.877825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.477 qpair failed and we were unable to recover it. 
00:26:46.477 [2024-07-15 18:52:02.877949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.477 [2024-07-15 18:52:02.877959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.477 qpair failed and we were unable to recover it. 00:26:46.477 [2024-07-15 18:52:02.878120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.477 [2024-07-15 18:52:02.878130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.477 qpair failed and we were unable to recover it. 00:26:46.477 [2024-07-15 18:52:02.878240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.477 [2024-07-15 18:52:02.878250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.477 qpair failed and we were unable to recover it. 00:26:46.477 [2024-07-15 18:52:02.878346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.477 [2024-07-15 18:52:02.878356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.477 qpair failed and we were unable to recover it. 00:26:46.477 [2024-07-15 18:52:02.878518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.477 [2024-07-15 18:52:02.878528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.477 qpair failed and we were unable to recover it. 
00:26:46.477 [2024-07-15 18:52:02.878729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.477 [2024-07-15 18:52:02.878739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.477 qpair failed and we were unable to recover it. 00:26:46.477 [2024-07-15 18:52:02.878920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.477 [2024-07-15 18:52:02.878930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.477 qpair failed and we were unable to recover it. 00:26:46.477 [2024-07-15 18:52:02.879112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.477 [2024-07-15 18:52:02.879122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.477 qpair failed and we were unable to recover it. 00:26:46.477 [2024-07-15 18:52:02.879287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.477 [2024-07-15 18:52:02.879298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.477 qpair failed and we were unable to recover it. 00:26:46.477 [2024-07-15 18:52:02.879477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.477 [2024-07-15 18:52:02.879487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.477 qpair failed and we were unable to recover it. 
00:26:46.477 [2024-07-15 18:52:02.879606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.477 [2024-07-15 18:52:02.879618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.477 qpair failed and we were unable to recover it. 00:26:46.477 [2024-07-15 18:52:02.879732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.477 [2024-07-15 18:52:02.879742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.477 qpair failed and we were unable to recover it. 00:26:46.477 [2024-07-15 18:52:02.879901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.477 [2024-07-15 18:52:02.879911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.477 qpair failed and we were unable to recover it. 00:26:46.477 [2024-07-15 18:52:02.880029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.477 [2024-07-15 18:52:02.880039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.477 qpair failed and we were unable to recover it. 00:26:46.477 [2024-07-15 18:52:02.880211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.477 [2024-07-15 18:52:02.880221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.477 qpair failed and we were unable to recover it. 
00:26:46.477 [2024-07-15 18:52:02.880419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.477 [2024-07-15 18:52:02.880429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.477 qpair failed and we were unable to recover it. 00:26:46.477 [2024-07-15 18:52:02.880662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.477 [2024-07-15 18:52:02.880672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.477 qpair failed and we were unable to recover it. 00:26:46.477 [2024-07-15 18:52:02.880952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.477 [2024-07-15 18:52:02.880962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.477 qpair failed and we were unable to recover it. 00:26:46.477 [2024-07-15 18:52:02.881236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.477 [2024-07-15 18:52:02.881246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.477 qpair failed and we were unable to recover it. 00:26:46.477 [2024-07-15 18:52:02.881427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.477 [2024-07-15 18:52:02.881437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.477 qpair failed and we were unable to recover it. 
00:26:46.477 [2024-07-15 18:52:02.881619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.477 [2024-07-15 18:52:02.881629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.477 qpair failed and we were unable to recover it. 
00:26:46.480 [... last 3 messages repeated 114 more times for tqpair=0x7ff7c0000b90 (addr=10.0.0.2, port=4420), timestamps 18:52:02.881818 through 18:52:02.899858 ...] 
00:26:46.480 [2024-07-15 18:52:02.899966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.480 [2024-07-15 18:52:02.899976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.480 qpair failed and we were unable to recover it. 00:26:46.480 [2024-07-15 18:52:02.900135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.480 [2024-07-15 18:52:02.900145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.480 qpair failed and we were unable to recover it. 00:26:46.480 [2024-07-15 18:52:02.900380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.480 [2024-07-15 18:52:02.900391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.480 qpair failed and we were unable to recover it. 00:26:46.480 [2024-07-15 18:52:02.900554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.900564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 00:26:46.481 [2024-07-15 18:52:02.900731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.900741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 
00:26:46.481 [2024-07-15 18:52:02.900852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.900862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 00:26:46.481 [2024-07-15 18:52:02.901042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.901052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 00:26:46.481 [2024-07-15 18:52:02.901261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.901272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 00:26:46.481 [2024-07-15 18:52:02.901446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.901456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 00:26:46.481 [2024-07-15 18:52:02.901565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.901575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 
00:26:46.481 [2024-07-15 18:52:02.901669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.901679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 00:26:46.481 [2024-07-15 18:52:02.901855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.901865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 00:26:46.481 [2024-07-15 18:52:02.901977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.901986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 00:26:46.481 [2024-07-15 18:52:02.902085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.902095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 00:26:46.481 [2024-07-15 18:52:02.902271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.902280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 
00:26:46.481 [2024-07-15 18:52:02.902391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.902401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 00:26:46.481 [2024-07-15 18:52:02.902526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.902536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 00:26:46.481 [2024-07-15 18:52:02.902647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.902657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 00:26:46.481 [2024-07-15 18:52:02.902766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.902776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 00:26:46.481 [2024-07-15 18:52:02.903006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.903016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 
00:26:46.481 [2024-07-15 18:52:02.903131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.903141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 00:26:46.481 [2024-07-15 18:52:02.903242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.903252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 00:26:46.481 [2024-07-15 18:52:02.903426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.903437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 00:26:46.481 [2024-07-15 18:52:02.903607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.903617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 00:26:46.481 [2024-07-15 18:52:02.903781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.903791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 
00:26:46.481 [2024-07-15 18:52:02.903894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.903904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 00:26:46.481 [2024-07-15 18:52:02.904133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.904142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 00:26:46.481 [2024-07-15 18:52:02.904262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.904272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 00:26:46.481 [2024-07-15 18:52:02.904385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.904395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 00:26:46.481 [2024-07-15 18:52:02.904621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.904631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 
00:26:46.481 [2024-07-15 18:52:02.904746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.904757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 00:26:46.481 [2024-07-15 18:52:02.904937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.904947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 00:26:46.481 [2024-07-15 18:52:02.905137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.481 [2024-07-15 18:52:02.905147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.481 qpair failed and we were unable to recover it. 00:26:46.481 [2024-07-15 18:52:02.905265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.905275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 00:26:46.482 [2024-07-15 18:52:02.905450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.905460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 
00:26:46.482 [2024-07-15 18:52:02.905540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.905550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 00:26:46.482 [2024-07-15 18:52:02.905644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.905654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 00:26:46.482 [2024-07-15 18:52:02.905885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.905895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 00:26:46.482 [2024-07-15 18:52:02.905997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.906008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 00:26:46.482 [2024-07-15 18:52:02.906260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.906271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 
00:26:46.482 [2024-07-15 18:52:02.906432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.906442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 00:26:46.482 [2024-07-15 18:52:02.906602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.906613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 00:26:46.482 [2024-07-15 18:52:02.906786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.906796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 00:26:46.482 [2024-07-15 18:52:02.906903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.906913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 00:26:46.482 [2024-07-15 18:52:02.907143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.907153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 
00:26:46.482 [2024-07-15 18:52:02.907326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.907337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 00:26:46.482 [2024-07-15 18:52:02.907462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.907472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 00:26:46.482 [2024-07-15 18:52:02.907721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.907731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 00:26:46.482 [2024-07-15 18:52:02.907895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.907906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 00:26:46.482 [2024-07-15 18:52:02.908136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.908146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 
00:26:46.482 [2024-07-15 18:52:02.908247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.908259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 00:26:46.482 [2024-07-15 18:52:02.908434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.908444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 00:26:46.482 [2024-07-15 18:52:02.908677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.908687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 00:26:46.482 [2024-07-15 18:52:02.908837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.908847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 00:26:46.482 [2024-07-15 18:52:02.909022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.909032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 
00:26:46.482 [2024-07-15 18:52:02.909144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.909154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 00:26:46.482 [2024-07-15 18:52:02.909248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.909258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 00:26:46.482 [2024-07-15 18:52:02.909520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.909530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 00:26:46.482 [2024-07-15 18:52:02.909789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.909799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 00:26:46.482 [2024-07-15 18:52:02.909961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.909971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 
00:26:46.482 [2024-07-15 18:52:02.910132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.910142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 00:26:46.482 [2024-07-15 18:52:02.910344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.910355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 00:26:46.482 [2024-07-15 18:52:02.910532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.910545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 00:26:46.482 [2024-07-15 18:52:02.910705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.910715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 00:26:46.482 [2024-07-15 18:52:02.910937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.910948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 
00:26:46.482 [2024-07-15 18:52:02.911112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.911122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 00:26:46.482 [2024-07-15 18:52:02.911347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.911358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 00:26:46.482 [2024-07-15 18:52:02.911453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.911464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 00:26:46.482 [2024-07-15 18:52:02.911576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.482 [2024-07-15 18:52:02.911586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.482 qpair failed and we were unable to recover it. 00:26:46.483 [2024-07-15 18:52:02.911755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.483 [2024-07-15 18:52:02.911766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.483 qpair failed and we were unable to recover it. 
00:26:46.483 [2024-07-15 18:52:02.911933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.483 [2024-07-15 18:52:02.911943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.483 qpair failed and we were unable to recover it. 00:26:46.483 [2024-07-15 18:52:02.912120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.483 [2024-07-15 18:52:02.912131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.483 qpair failed and we were unable to recover it. 00:26:46.483 [2024-07-15 18:52:02.912297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.483 [2024-07-15 18:52:02.912308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.483 qpair failed and we were unable to recover it. 00:26:46.483 [2024-07-15 18:52:02.912500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.483 [2024-07-15 18:52:02.912510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.483 qpair failed and we were unable to recover it. 00:26:46.483 [2024-07-15 18:52:02.912693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.483 [2024-07-15 18:52:02.912703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.483 qpair failed and we were unable to recover it. 
00:26:46.483 [2024-07-15 18:52:02.912879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.483 [2024-07-15 18:52:02.912889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.483 qpair failed and we were unable to recover it. 
00:26:46.486 [... previous three-line error repeated for each reconnect attempt of tqpair=0x7ff7c0000b90 from 18:52:02.913003 through 18:52:02.932087 ...] 
00:26:46.486 [2024-07-15 18:52:02.932186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.486 [2024-07-15 18:52:02.932197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.486 qpair failed and we were unable to recover it. 00:26:46.486 [2024-07-15 18:52:02.932293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.486 [2024-07-15 18:52:02.932305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.486 qpair failed and we were unable to recover it. 00:26:46.486 [2024-07-15 18:52:02.932465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.486 [2024-07-15 18:52:02.932477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.486 qpair failed and we were unable to recover it. 00:26:46.486 [2024-07-15 18:52:02.932654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.486 [2024-07-15 18:52:02.932664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.486 qpair failed and we were unable to recover it. 00:26:46.486 [2024-07-15 18:52:02.932860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.486 [2024-07-15 18:52:02.932870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.486 qpair failed and we were unable to recover it. 
00:26:46.486 [2024-07-15 18:52:02.932984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.486 [2024-07-15 18:52:02.932994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.486 qpair failed and we were unable to recover it. 00:26:46.486 [2024-07-15 18:52:02.933116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.486 [2024-07-15 18:52:02.933127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.486 qpair failed and we were unable to recover it. 00:26:46.486 [2024-07-15 18:52:02.933312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.486 [2024-07-15 18:52:02.933323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.486 qpair failed and we were unable to recover it. 00:26:46.486 [2024-07-15 18:52:02.933518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.486 [2024-07-15 18:52:02.933528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.486 qpair failed and we were unable to recover it. 00:26:46.486 [2024-07-15 18:52:02.933698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.486 [2024-07-15 18:52:02.933708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.486 qpair failed and we were unable to recover it. 
00:26:46.486 [2024-07-15 18:52:02.933933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.486 [2024-07-15 18:52:02.933943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.486 qpair failed and we were unable to recover it. 00:26:46.486 [2024-07-15 18:52:02.934055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.486 [2024-07-15 18:52:02.934065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.486 qpair failed and we were unable to recover it. 00:26:46.486 [2024-07-15 18:52:02.934189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.486 [2024-07-15 18:52:02.934198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.486 qpair failed and we were unable to recover it. 00:26:46.486 [2024-07-15 18:52:02.934321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.486 [2024-07-15 18:52:02.934332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.486 qpair failed and we were unable to recover it. 00:26:46.486 [2024-07-15 18:52:02.934558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.486 [2024-07-15 18:52:02.934568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.486 qpair failed and we were unable to recover it. 
00:26:46.486 [2024-07-15 18:52:02.934685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.486 [2024-07-15 18:52:02.934695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.486 qpair failed and we were unable to recover it. 00:26:46.486 [2024-07-15 18:52:02.934880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.486 [2024-07-15 18:52:02.934890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.486 qpair failed and we were unable to recover it. 00:26:46.486 [2024-07-15 18:52:02.935014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.486 [2024-07-15 18:52:02.935024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.486 qpair failed and we were unable to recover it. 00:26:46.486 [2024-07-15 18:52:02.935194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.486 [2024-07-15 18:52:02.935204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.486 qpair failed and we were unable to recover it. 00:26:46.486 [2024-07-15 18:52:02.935309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.486 [2024-07-15 18:52:02.935320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.486 qpair failed and we were unable to recover it. 
00:26:46.486 [2024-07-15 18:52:02.935422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.935432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 00:26:46.487 [2024-07-15 18:52:02.935622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.935632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 00:26:46.487 [2024-07-15 18:52:02.935716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.935726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 00:26:46.487 [2024-07-15 18:52:02.935836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.935846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 00:26:46.487 [2024-07-15 18:52:02.936020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.936030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 
00:26:46.487 [2024-07-15 18:52:02.936215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.936228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 00:26:46.487 [2024-07-15 18:52:02.936408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.936418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 00:26:46.487 [2024-07-15 18:52:02.936670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.936680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 00:26:46.487 [2024-07-15 18:52:02.936842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.936852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 00:26:46.487 [2024-07-15 18:52:02.936984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.936994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 
00:26:46.487 [2024-07-15 18:52:02.937195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.937205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 00:26:46.487 [2024-07-15 18:52:02.937387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.937397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 00:26:46.487 [2024-07-15 18:52:02.937513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.937525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 00:26:46.487 [2024-07-15 18:52:02.937753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.937763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 00:26:46.487 [2024-07-15 18:52:02.937885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.937895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 
00:26:46.487 [2024-07-15 18:52:02.938136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.938146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 00:26:46.487 [2024-07-15 18:52:02.938318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.938328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 00:26:46.487 [2024-07-15 18:52:02.938445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.938455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 00:26:46.487 [2024-07-15 18:52:02.938708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.938718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 00:26:46.487 [2024-07-15 18:52:02.938969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.938979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 
00:26:46.487 [2024-07-15 18:52:02.939204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.939215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 00:26:46.487 [2024-07-15 18:52:02.939326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.939337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 00:26:46.487 [2024-07-15 18:52:02.939521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.939531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 00:26:46.487 [2024-07-15 18:52:02.939636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.939647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 00:26:46.487 [2024-07-15 18:52:02.939834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.939844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 
00:26:46.487 [2024-07-15 18:52:02.940020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.940031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 00:26:46.487 [2024-07-15 18:52:02.940260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.940270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 00:26:46.487 [2024-07-15 18:52:02.940446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.940457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 00:26:46.487 [2024-07-15 18:52:02.940713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.940723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 00:26:46.487 [2024-07-15 18:52:02.940837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.940847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 
00:26:46.487 [2024-07-15 18:52:02.941008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.941018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 00:26:46.487 [2024-07-15 18:52:02.941126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.941136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 00:26:46.487 [2024-07-15 18:52:02.941301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.941312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 00:26:46.487 [2024-07-15 18:52:02.941489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.941499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 00:26:46.487 [2024-07-15 18:52:02.941669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.941679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 
00:26:46.487 [2024-07-15 18:52:02.941851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.487 [2024-07-15 18:52:02.941861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.487 qpair failed and we were unable to recover it. 00:26:46.488 [2024-07-15 18:52:02.942122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.488 [2024-07-15 18:52:02.942131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.488 qpair failed and we were unable to recover it. 00:26:46.488 [2024-07-15 18:52:02.942237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.488 [2024-07-15 18:52:02.942247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.488 qpair failed and we were unable to recover it. 00:26:46.488 [2024-07-15 18:52:02.942457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.488 [2024-07-15 18:52:02.942467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.488 qpair failed and we were unable to recover it. 00:26:46.488 [2024-07-15 18:52:02.942680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.488 [2024-07-15 18:52:02.942690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.488 qpair failed and we were unable to recover it. 
00:26:46.488 [2024-07-15 18:52:02.942943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.488 [2024-07-15 18:52:02.942953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.488 qpair failed and we were unable to recover it. 00:26:46.488 [2024-07-15 18:52:02.943074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.488 [2024-07-15 18:52:02.943084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.488 qpair failed and we were unable to recover it. 00:26:46.488 [2024-07-15 18:52:02.943245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.488 [2024-07-15 18:52:02.943255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.488 qpair failed and we were unable to recover it. 00:26:46.488 [2024-07-15 18:52:02.943373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.488 [2024-07-15 18:52:02.943383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.488 qpair failed and we were unable to recover it. 00:26:46.488 [2024-07-15 18:52:02.943576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.488 [2024-07-15 18:52:02.943586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.488 qpair failed and we were unable to recover it. 
00:26:46.488 [2024-07-15 18:52:02.943758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.488 [2024-07-15 18:52:02.943768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.488 qpair failed and we were unable to recover it. 00:26:46.488 [2024-07-15 18:52:02.943993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.488 [2024-07-15 18:52:02.944003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.488 qpair failed and we were unable to recover it. 00:26:46.488 [2024-07-15 18:52:02.944122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.488 [2024-07-15 18:52:02.944132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.488 qpair failed and we were unable to recover it. 00:26:46.488 [2024-07-15 18:52:02.944309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.488 [2024-07-15 18:52:02.944319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.488 qpair failed and we were unable to recover it. 00:26:46.488 [2024-07-15 18:52:02.944505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.488 [2024-07-15 18:52:02.944515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.488 qpair failed and we were unable to recover it. 
00:26:46.488 [2024-07-15 18:52:02.944619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.488 [2024-07-15 18:52:02.944629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.488 qpair failed and we were unable to recover it. 00:26:46.488 [2024-07-15 18:52:02.944740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.488 [2024-07-15 18:52:02.944749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.488 qpair failed and we were unable to recover it. 00:26:46.488 [2024-07-15 18:52:02.944857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.488 [2024-07-15 18:52:02.944870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.488 qpair failed and we were unable to recover it. 00:26:46.488 [2024-07-15 18:52:02.945032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.488 [2024-07-15 18:52:02.945042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.488 qpair failed and we were unable to recover it. 00:26:46.488 [2024-07-15 18:52:02.945207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.488 [2024-07-15 18:52:02.945218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.488 qpair failed and we were unable to recover it. 
00:26:46.488 [2024-07-15 18:52:02.945330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.488 [2024-07-15 18:52:02.945340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.488 qpair failed and we were unable to recover it.
00:26:46.491 [... the same three-line error (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats roughly 110 more times between 18:52:02.945330 and 18:52:02.964907 ...]
00:26:46.491 [2024-07-15 18:52:02.965072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.491 [2024-07-15 18:52:02.965082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.491 qpair failed and we were unable to recover it. 00:26:46.491 [2024-07-15 18:52:02.965189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.491 [2024-07-15 18:52:02.965200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.491 qpair failed and we were unable to recover it. 00:26:46.491 [2024-07-15 18:52:02.965305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.491 [2024-07-15 18:52:02.965317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.491 qpair failed and we were unable to recover it. 00:26:46.491 [2024-07-15 18:52:02.965480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.491 [2024-07-15 18:52:02.965492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.491 qpair failed and we were unable to recover it. 00:26:46.491 [2024-07-15 18:52:02.965688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.491 [2024-07-15 18:52:02.965699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.491 qpair failed and we were unable to recover it. 
00:26:46.491 [2024-07-15 18:52:02.965814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.491 [2024-07-15 18:52:02.965824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.491 qpair failed and we were unable to recover it. 00:26:46.491 [2024-07-15 18:52:02.966001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.491 [2024-07-15 18:52:02.966012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.491 qpair failed and we were unable to recover it. 00:26:46.491 [2024-07-15 18:52:02.966175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.491 [2024-07-15 18:52:02.966186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 00:26:46.492 [2024-07-15 18:52:02.966306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.966318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 00:26:46.492 [2024-07-15 18:52:02.966507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.966517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 
00:26:46.492 [2024-07-15 18:52:02.966611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.966622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 00:26:46.492 [2024-07-15 18:52:02.966815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.966826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 00:26:46.492 [2024-07-15 18:52:02.967002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.967013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 00:26:46.492 [2024-07-15 18:52:02.967184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.967195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 00:26:46.492 [2024-07-15 18:52:02.967395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.967406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 
00:26:46.492 [2024-07-15 18:52:02.967529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.967540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 00:26:46.492 [2024-07-15 18:52:02.967664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.967674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 00:26:46.492 [2024-07-15 18:52:02.967844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.967854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 00:26:46.492 [2024-07-15 18:52:02.967922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.967932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 00:26:46.492 [2024-07-15 18:52:02.968099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.968110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 
00:26:46.492 [2024-07-15 18:52:02.968215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.968240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 00:26:46.492 [2024-07-15 18:52:02.968500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.968511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 00:26:46.492 [2024-07-15 18:52:02.968618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.968628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 00:26:46.492 [2024-07-15 18:52:02.968792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.968802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 00:26:46.492 [2024-07-15 18:52:02.968897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.968908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 
00:26:46.492 [2024-07-15 18:52:02.969026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.969036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 00:26:46.492 [2024-07-15 18:52:02.969153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.969164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 00:26:46.492 [2024-07-15 18:52:02.969292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.969304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 00:26:46.492 [2024-07-15 18:52:02.969532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.969543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 00:26:46.492 [2024-07-15 18:52:02.969644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.969655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 
00:26:46.492 [2024-07-15 18:52:02.969849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.969859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 00:26:46.492 [2024-07-15 18:52:02.970020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.970031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 00:26:46.492 [2024-07-15 18:52:02.970211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.970221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 00:26:46.492 [2024-07-15 18:52:02.970450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.970461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 00:26:46.492 [2024-07-15 18:52:02.970556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.970567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 
00:26:46.492 [2024-07-15 18:52:02.970817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.970828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 00:26:46.492 [2024-07-15 18:52:02.970952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.970962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 00:26:46.492 [2024-07-15 18:52:02.971135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.971145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 00:26:46.492 [2024-07-15 18:52:02.971333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.971344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 00:26:46.492 [2024-07-15 18:52:02.971422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.971432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 
00:26:46.492 [2024-07-15 18:52:02.971635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.971645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 00:26:46.492 [2024-07-15 18:52:02.971873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.971884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 00:26:46.492 [2024-07-15 18:52:02.972084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.492 [2024-07-15 18:52:02.972096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.492 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.972209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.972219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.972328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.972339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 
00:26:46.493 [2024-07-15 18:52:02.972434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.972444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.972618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.972629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.972789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.972800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.972890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.972900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.973083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.973094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 
00:26:46.493 [2024-07-15 18:52:02.973190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.973200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.973307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.973318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.973475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.973486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.973601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.973611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.973827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.973839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 
00:26:46.493 [2024-07-15 18:52:02.974044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.974054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.974219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.974232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.974416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.974428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.974607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.974618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.974789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.974799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 
00:26:46.493 [2024-07-15 18:52:02.974906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.974917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.975026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.975036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.975215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.975233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.975406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.975416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.975591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.975603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 
00:26:46.493 [2024-07-15 18:52:02.975714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.975724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.975913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.975923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.976157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.976168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.976346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.976357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.976588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.976599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 
00:26:46.493 [2024-07-15 18:52:02.976716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.976727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.976846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.976856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.977055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.977065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.977236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.977247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.977505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.977516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 
00:26:46.493 [2024-07-15 18:52:02.977636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.977647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.977753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.977764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.977881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.977891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.977994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.978005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.978205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.978215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 
00:26:46.493 [2024-07-15 18:52:02.978387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.493 [2024-07-15 18:52:02.978397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.493 qpair failed and we were unable to recover it. 00:26:46.493 [2024-07-15 18:52:02.978627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.978638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.978843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.978855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.978978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.978989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.979164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.979175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 
00:26:46.494 [2024-07-15 18:52:02.979294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.979305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.979420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.979430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.979528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.979539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.979654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.979665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.979903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.979913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 
00:26:46.494 [2024-07-15 18:52:02.980023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.980033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.980141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.980152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.980336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.980347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.980575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.980586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.980701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.980712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 
00:26:46.494 [2024-07-15 18:52:02.980821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.980831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.981000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.981010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.981126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.981137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.981400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.981413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.981519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.981530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 
00:26:46.494 [2024-07-15 18:52:02.981635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.981646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.981809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.981820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.982045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.982056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.982152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.982163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.982323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.982334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 
00:26:46.494 [2024-07-15 18:52:02.982539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.982550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.982780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.982791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.982951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.982962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.983124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.983134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.983305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.983321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 
00:26:46.494 [2024-07-15 18:52:02.983501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.983512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.983681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.983692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.983935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.983945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.984175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.984185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.984387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.984398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 
00:26:46.494 [2024-07-15 18:52:02.984512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.984523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.984655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.984666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.984891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.984901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.494 qpair failed and we were unable to recover it. 00:26:46.494 [2024-07-15 18:52:02.985202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.494 [2024-07-15 18:52:02.985213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 00:26:46.495 [2024-07-15 18:52:02.985385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.495 [2024-07-15 18:52:02.985396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 
00:26:46.495 [2024-07-15 18:52:02.985513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.495 [2024-07-15 18:52:02.985525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 00:26:46.495 [2024-07-15 18:52:02.985652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.495 [2024-07-15 18:52:02.985663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 00:26:46.495 [2024-07-15 18:52:02.985772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.495 [2024-07-15 18:52:02.985785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 00:26:46.495 [2024-07-15 18:52:02.985885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.495 [2024-07-15 18:52:02.985895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 00:26:46.495 [2024-07-15 18:52:02.986055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.495 [2024-07-15 18:52:02.986065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 
00:26:46.495 [2024-07-15 18:52:02.986262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.495 [2024-07-15 18:52:02.986272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 00:26:46.495 [2024-07-15 18:52:02.986385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.495 [2024-07-15 18:52:02.986395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 00:26:46.495 [2024-07-15 18:52:02.986556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.495 [2024-07-15 18:52:02.986566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 00:26:46.495 [2024-07-15 18:52:02.986681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.495 [2024-07-15 18:52:02.986691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 00:26:46.495 [2024-07-15 18:52:02.986886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.495 [2024-07-15 18:52:02.986897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 
00:26:46.495 [2024-07-15 18:52:02.987064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.495 [2024-07-15 18:52:02.987074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 00:26:46.495 [2024-07-15 18:52:02.987240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.495 [2024-07-15 18:52:02.987251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 00:26:46.495 [2024-07-15 18:52:02.987367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.495 [2024-07-15 18:52:02.987378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 00:26:46.495 [2024-07-15 18:52:02.987488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.495 [2024-07-15 18:52:02.987498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 00:26:46.495 [2024-07-15 18:52:02.987750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.495 [2024-07-15 18:52:02.987761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 
00:26:46.495 [2024-07-15 18:52:02.987933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.495 [2024-07-15 18:52:02.987943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 00:26:46.495 [2024-07-15 18:52:02.988123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.495 [2024-07-15 18:52:02.988135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 00:26:46.495 [2024-07-15 18:52:02.988244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.495 [2024-07-15 18:52:02.988256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 00:26:46.495 [2024-07-15 18:52:02.988374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.495 [2024-07-15 18:52:02.988385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 00:26:46.495 [2024-07-15 18:52:02.988485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.495 [2024-07-15 18:52:02.988495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 
00:26:46.495 [2024-07-15 18:52:02.988614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.495 [2024-07-15 18:52:02.988623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 00:26:46.495 [2024-07-15 18:52:02.988735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.495 [2024-07-15 18:52:02.988745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 00:26:46.495 [2024-07-15 18:52:02.988970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.495 [2024-07-15 18:52:02.988980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 00:26:46.495 [2024-07-15 18:52:02.989147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.495 [2024-07-15 18:52:02.989157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 00:26:46.495 [2024-07-15 18:52:02.989320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.495 [2024-07-15 18:52:02.989331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 
00:26:46.495 [2024-07-15 18:52:02.989510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.495 [2024-07-15 18:52:02.989520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 00:26:46.495 [2024-07-15 18:52:02.989643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.495 [2024-07-15 18:52:02.989653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 00:26:46.495 [2024-07-15 18:52:02.989820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.495 [2024-07-15 18:52:02.989831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.495 qpair failed and we were unable to recover it. 00:26:46.496 [2024-07-15 18:52:02.990000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.496 [2024-07-15 18:52:02.990010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.496 qpair failed and we were unable to recover it. 00:26:46.496 [2024-07-15 18:52:02.990125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.496 [2024-07-15 18:52:02.990138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.496 qpair failed and we were unable to recover it. 
00:26:46.496 [2024-07-15 18:52:02.990255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.496 [2024-07-15 18:52:02.990266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.496 qpair failed and we were unable to recover it. 00:26:46.496 [2024-07-15 18:52:02.990437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.496 [2024-07-15 18:52:02.990448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.496 qpair failed and we were unable to recover it. 00:26:46.496 [2024-07-15 18:52:02.990702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.496 [2024-07-15 18:52:02.990713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.496 qpair failed and we were unable to recover it. 00:26:46.496 [2024-07-15 18:52:02.990884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.496 [2024-07-15 18:52:02.990894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.496 qpair failed and we were unable to recover it. 00:26:46.496 [2024-07-15 18:52:02.991004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.496 [2024-07-15 18:52:02.991014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.496 qpair failed and we were unable to recover it. 
00:26:46.496 [2024-07-15 18:52:02.991136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.496 [2024-07-15 18:52:02.991146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.496 qpair failed and we were unable to recover it. 00:26:46.496 [2024-07-15 18:52:02.991322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.496 [2024-07-15 18:52:02.991334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.496 qpair failed and we were unable to recover it. 00:26:46.496 [2024-07-15 18:52:02.991559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.496 [2024-07-15 18:52:02.991569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.496 qpair failed and we were unable to recover it. 00:26:46.496 [2024-07-15 18:52:02.991796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.496 [2024-07-15 18:52:02.991807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.496 qpair failed and we were unable to recover it. 00:26:46.496 [2024-07-15 18:52:02.991993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.496 [2024-07-15 18:52:02.992004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.496 qpair failed and we were unable to recover it. 
00:26:46.496 [2024-07-15 18:52:02.992117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.496 [2024-07-15 18:52:02.992127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.496 qpair failed and we were unable to recover it. 00:26:46.496 [2024-07-15 18:52:02.992310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.496 [2024-07-15 18:52:02.992320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.496 qpair failed and we were unable to recover it. 00:26:46.496 [2024-07-15 18:52:02.992491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.496 [2024-07-15 18:52:02.992502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.496 qpair failed and we were unable to recover it. 00:26:46.496 [2024-07-15 18:52:02.992674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.496 [2024-07-15 18:52:02.992685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.496 qpair failed and we were unable to recover it. 00:26:46.496 [2024-07-15 18:52:02.992786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.496 [2024-07-15 18:52:02.992796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.496 qpair failed and we were unable to recover it. 
00:26:46.496 [2024-07-15 18:52:02.992983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.496 [2024-07-15 18:52:02.992993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.496 qpair failed and we were unable to recover it. 00:26:46.496 [2024-07-15 18:52:02.993099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.496 [2024-07-15 18:52:02.993110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.496 qpair failed and we were unable to recover it. 00:26:46.496 [2024-07-15 18:52:02.993272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.496 [2024-07-15 18:52:02.993283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.496 qpair failed and we were unable to recover it. 00:26:46.496 [2024-07-15 18:52:02.993406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.496 [2024-07-15 18:52:02.993417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.496 qpair failed and we were unable to recover it. 00:26:46.496 [2024-07-15 18:52:02.993517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.496 [2024-07-15 18:52:02.993528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.496 qpair failed and we were unable to recover it. 
00:26:46.496 [2024-07-15 18:52:02.993689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.496 [2024-07-15 18:52:02.993699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.496 qpair failed and we were unable to recover it. 
[identical posix.c:1038:posix_sock_create "connect() failed, errno = 111" / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420" message pair repeated for every retry from 18:52:02.993883 through 18:52:03.012859, each attempt ending with "qpair failed and we were unable to recover it."]
00:26:46.499 [2024-07-15 18:52:03.012977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.499 [2024-07-15 18:52:03.012987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.499 qpair failed and we were unable to recover it. 00:26:46.499 [2024-07-15 18:52:03.013097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.499 [2024-07-15 18:52:03.013107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.499 qpair failed and we were unable to recover it. 00:26:46.499 [2024-07-15 18:52:03.013359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.499 [2024-07-15 18:52:03.013369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.499 qpair failed and we were unable to recover it. 00:26:46.499 [2024-07-15 18:52:03.013551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.499 [2024-07-15 18:52:03.013560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.499 qpair failed and we were unable to recover it. 00:26:46.499 [2024-07-15 18:52:03.013682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.499 [2024-07-15 18:52:03.013692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.499 qpair failed and we were unable to recover it. 
00:26:46.499 [2024-07-15 18:52:03.013865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.499 [2024-07-15 18:52:03.013875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.499 qpair failed and we were unable to recover it. 00:26:46.499 [2024-07-15 18:52:03.014043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.499 [2024-07-15 18:52:03.014054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.499 qpair failed and we were unable to recover it. 00:26:46.499 [2024-07-15 18:52:03.014190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.499 [2024-07-15 18:52:03.014199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.499 qpair failed and we were unable to recover it. 00:26:46.499 [2024-07-15 18:52:03.014472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.499 [2024-07-15 18:52:03.014483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.499 qpair failed and we were unable to recover it. 00:26:46.499 [2024-07-15 18:52:03.014661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.499 [2024-07-15 18:52:03.014671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.499 qpair failed and we were unable to recover it. 
00:26:46.499 [2024-07-15 18:52:03.014857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.499 [2024-07-15 18:52:03.014867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.499 qpair failed and we were unable to recover it. 00:26:46.499 [2024-07-15 18:52:03.015048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.499 [2024-07-15 18:52:03.015058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.499 qpair failed and we were unable to recover it. 00:26:46.499 [2024-07-15 18:52:03.015183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.499 [2024-07-15 18:52:03.015193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.499 qpair failed and we were unable to recover it. 00:26:46.499 [2024-07-15 18:52:03.015367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.499 [2024-07-15 18:52:03.015377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.499 qpair failed and we were unable to recover it. 00:26:46.499 [2024-07-15 18:52:03.015634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.499 [2024-07-15 18:52:03.015644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.499 qpair failed and we were unable to recover it. 
00:26:46.499 [2024-07-15 18:52:03.015847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.015857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 00:26:46.500 [2024-07-15 18:52:03.016028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.016038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 00:26:46.500 [2024-07-15 18:52:03.016163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.016173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 00:26:46.500 [2024-07-15 18:52:03.016404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.016414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 00:26:46.500 [2024-07-15 18:52:03.016682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.016692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 
00:26:46.500 [2024-07-15 18:52:03.016819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.016829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 00:26:46.500 [2024-07-15 18:52:03.016954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.016964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 00:26:46.500 [2024-07-15 18:52:03.017078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.017088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 00:26:46.500 [2024-07-15 18:52:03.017341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.017351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 00:26:46.500 [2024-07-15 18:52:03.017578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.017589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 
00:26:46.500 [2024-07-15 18:52:03.017695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.017705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 00:26:46.500 [2024-07-15 18:52:03.017928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.017938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 00:26:46.500 [2024-07-15 18:52:03.018114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.018123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 00:26:46.500 [2024-07-15 18:52:03.018263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.018273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 00:26:46.500 [2024-07-15 18:52:03.018383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.018392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 
00:26:46.500 [2024-07-15 18:52:03.018639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.018648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 00:26:46.500 [2024-07-15 18:52:03.018816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.018826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 00:26:46.500 [2024-07-15 18:52:03.019028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.019038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 00:26:46.500 [2024-07-15 18:52:03.019270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.019281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 00:26:46.500 [2024-07-15 18:52:03.019407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.019416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 
00:26:46.500 [2024-07-15 18:52:03.019667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.019676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 00:26:46.500 [2024-07-15 18:52:03.019847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.019857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 00:26:46.500 [2024-07-15 18:52:03.020028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.020038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 00:26:46.500 [2024-07-15 18:52:03.020262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.020273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 00:26:46.500 [2024-07-15 18:52:03.020554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.020564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 
00:26:46.500 [2024-07-15 18:52:03.020782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.020792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 00:26:46.500 [2024-07-15 18:52:03.021039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.021049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 00:26:46.500 [2024-07-15 18:52:03.021295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.021305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 00:26:46.500 [2024-07-15 18:52:03.021499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.021509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 00:26:46.500 [2024-07-15 18:52:03.021617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.500 [2024-07-15 18:52:03.021627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.500 qpair failed and we were unable to recover it. 
00:26:46.500 [2024-07-15 18:52:03.021825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.021835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 00:26:46.501 [2024-07-15 18:52:03.022062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.022072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 00:26:46.501 [2024-07-15 18:52:03.022241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.022251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 00:26:46.501 [2024-07-15 18:52:03.022375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.022385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 00:26:46.501 [2024-07-15 18:52:03.022514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.022524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 
00:26:46.501 [2024-07-15 18:52:03.022632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.022642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 00:26:46.501 [2024-07-15 18:52:03.022746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.022756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 00:26:46.501 [2024-07-15 18:52:03.022915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.022925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 00:26:46.501 [2024-07-15 18:52:03.023141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.023151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 00:26:46.501 [2024-07-15 18:52:03.023314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.023325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 
00:26:46.501 [2024-07-15 18:52:03.023396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.023405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 00:26:46.501 [2024-07-15 18:52:03.023565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.023575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 00:26:46.501 [2024-07-15 18:52:03.023824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.023834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 00:26:46.501 [2024-07-15 18:52:03.024110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.024120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 00:26:46.501 [2024-07-15 18:52:03.024281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.024291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 
00:26:46.501 [2024-07-15 18:52:03.024398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.024408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 00:26:46.501 [2024-07-15 18:52:03.024573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.024582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 00:26:46.501 [2024-07-15 18:52:03.024868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.024878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 00:26:46.501 [2024-07-15 18:52:03.025086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.025096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 00:26:46.501 [2024-07-15 18:52:03.025290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.025302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 
00:26:46.501 [2024-07-15 18:52:03.025577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.025587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 00:26:46.501 [2024-07-15 18:52:03.025781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.025791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 00:26:46.501 [2024-07-15 18:52:03.026056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.026065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 00:26:46.501 [2024-07-15 18:52:03.026228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.026239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 00:26:46.501 [2024-07-15 18:52:03.026407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.026417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 
00:26:46.501 [2024-07-15 18:52:03.026695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.026705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 00:26:46.501 [2024-07-15 18:52:03.026931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.026941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 00:26:46.501 [2024-07-15 18:52:03.027114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.027123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 00:26:46.501 [2024-07-15 18:52:03.027236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.027247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 00:26:46.501 [2024-07-15 18:52:03.027490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.501 [2024-07-15 18:52:03.027500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.501 qpair failed and we were unable to recover it. 
00:26:46.501 [2024-07-15 18:52:03.027625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.501 [2024-07-15 18:52:03.027635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.501 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure message triplet repeats continuously from 18:52:03.027625 through 18:52:03.033327, differing only in timestamps ...]
00:26:46.502 [2024-07-15 18:52:03.033317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.502 [2024-07-15 18:52:03.033327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.502 qpair failed and we were unable to recover it.
00:26:46.502 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:46.502 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:26:46.502 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:26:46.502 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:26:46.502 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:46.502 [2024-07-15 18:52:03.034344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.502 [2024-07-15 18:52:03.034368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.502 qpair failed and we were unable to recover it.
00:26:46.502 [2024-07-15 18:52:03.034660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.502 [2024-07-15 18:52:03.034673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.502 qpair failed and we were unable to recover it.
00:26:46.502 [2024-07-15 18:52:03.034920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.502 [2024-07-15 18:52:03.034931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.502 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure message triplet repeats continuously from 18:52:03.034920 through 18:52:03.047588, differing only in timestamps ...]
00:26:46.504 [2024-07-15 18:52:03.047769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.504 [2024-07-15 18:52:03.047779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.504 qpair failed and we were unable to recover it. 00:26:46.504 [2024-07-15 18:52:03.047870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.504 [2024-07-15 18:52:03.047880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.504 qpair failed and we were unable to recover it. 00:26:46.504 [2024-07-15 18:52:03.047972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.047982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 00:26:46.505 [2024-07-15 18:52:03.048076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.048086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 00:26:46.505 [2024-07-15 18:52:03.048204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.048214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 
00:26:46.505 [2024-07-15 18:52:03.048332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.048343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 00:26:46.505 [2024-07-15 18:52:03.048431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.048442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 00:26:46.505 [2024-07-15 18:52:03.048551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.048562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 00:26:46.505 [2024-07-15 18:52:03.048730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.048740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 00:26:46.505 [2024-07-15 18:52:03.048830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.048840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 
00:26:46.505 [2024-07-15 18:52:03.048944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.048955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 00:26:46.505 [2024-07-15 18:52:03.049064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.049074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 00:26:46.505 [2024-07-15 18:52:03.049172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.049182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 00:26:46.505 [2024-07-15 18:52:03.049365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.049376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 00:26:46.505 [2024-07-15 18:52:03.049540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.049551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 
00:26:46.505 [2024-07-15 18:52:03.049730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.049741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 00:26:46.505 [2024-07-15 18:52:03.049847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.049857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 00:26:46.505 [2024-07-15 18:52:03.050036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.050047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 00:26:46.505 [2024-07-15 18:52:03.050194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.050204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 00:26:46.505 [2024-07-15 18:52:03.050322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.050333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 
00:26:46.505 [2024-07-15 18:52:03.050434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.050445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 00:26:46.505 [2024-07-15 18:52:03.050558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.050593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 00:26:46.505 [2024-07-15 18:52:03.050769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.050785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 00:26:46.505 [2024-07-15 18:52:03.050968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.050982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 00:26:46.505 [2024-07-15 18:52:03.051114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.051129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 
00:26:46.505 [2024-07-15 18:52:03.051299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.051314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 00:26:46.505 [2024-07-15 18:52:03.051436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.051450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 00:26:46.505 [2024-07-15 18:52:03.051572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.051586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 00:26:46.505 [2024-07-15 18:52:03.051761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.051774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 00:26:46.505 [2024-07-15 18:52:03.051879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.051893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 
00:26:46.505 [2024-07-15 18:52:03.052054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.052068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 00:26:46.505 [2024-07-15 18:52:03.052179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.052194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 00:26:46.505 [2024-07-15 18:52:03.052304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.052319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 00:26:46.505 [2024-07-15 18:52:03.052422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.052437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 00:26:46.505 [2024-07-15 18:52:03.052615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.052634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 
00:26:46.505 [2024-07-15 18:52:03.052752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.052765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 00:26:46.505 [2024-07-15 18:52:03.052870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.052883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 00:26:46.505 [2024-07-15 18:52:03.053002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.505 [2024-07-15 18:52:03.053016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.505 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.053177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.053191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.053305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.053319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 
00:26:46.506 [2024-07-15 18:52:03.053424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.053439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.053557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.053569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.053698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.053709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.053778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.053788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.053891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.053901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 
00:26:46.506 [2024-07-15 18:52:03.053993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.054003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.054119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.054128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.054262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.054272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.054374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.054385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.054471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.054481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 
00:26:46.506 [2024-07-15 18:52:03.054582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.054592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.054696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.054705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.054802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.054811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.054912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.054922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.055028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.055038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 
00:26:46.506 [2024-07-15 18:52:03.055135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.055145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.055314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.055325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.055440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.055452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.055551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.055562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.055663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.055674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 
00:26:46.506 [2024-07-15 18:52:03.055774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.055784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.055892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.055907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.056112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.056126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.056195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.056210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.056328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.056343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 
00:26:46.506 [2024-07-15 18:52:03.056517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.056531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.056642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.056656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.056779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.056793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.056861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.056875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.057058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.057073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 
00:26:46.506 [2024-07-15 18:52:03.057198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.057211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7b8000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.057412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.057443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.057623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.057638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.057806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.057820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 00:26:46.506 [2024-07-15 18:52:03.057897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.057911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 
00:26:46.506 [2024-07-15 18:52:03.058035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.506 [2024-07-15 18:52:03.058049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.506 qpair failed and we were unable to recover it. 
00:26:46.508 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:46.508 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:46.508 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.508 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
00:26:46.510 [2024-07-15 18:52:03.077681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.510 [2024-07-15 18:52:03.077695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.510 qpair failed and we were unable to recover it. 00:26:46.510 [2024-07-15 18:52:03.077906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.510 [2024-07-15 18:52:03.077920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.510 qpair failed and we were unable to recover it. 00:26:46.510 [2024-07-15 18:52:03.078175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.510 [2024-07-15 18:52:03.078189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.510 qpair failed and we were unable to recover it. 00:26:46.510 [2024-07-15 18:52:03.078354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.510 [2024-07-15 18:52:03.078369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.510 qpair failed and we were unable to recover it. 00:26:46.510 [2024-07-15 18:52:03.078498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.510 [2024-07-15 18:52:03.078511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.510 qpair failed and we were unable to recover it. 
00:26:46.510 [2024-07-15 18:52:03.078711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.510 [2024-07-15 18:52:03.078726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.510 qpair failed and we were unable to recover it. 00:26:46.510 [2024-07-15 18:52:03.078853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.510 [2024-07-15 18:52:03.078867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.510 qpair failed and we were unable to recover it. 00:26:46.510 [2024-07-15 18:52:03.079150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.510 [2024-07-15 18:52:03.079164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.510 qpair failed and we were unable to recover it. 00:26:46.510 [2024-07-15 18:52:03.079359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.510 [2024-07-15 18:52:03.079373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.510 qpair failed and we were unable to recover it. 00:26:46.510 [2024-07-15 18:52:03.079539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.510 [2024-07-15 18:52:03.079552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.510 qpair failed and we were unable to recover it. 
00:26:46.510 [2024-07-15 18:52:03.079741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.510 [2024-07-15 18:52:03.079755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.510 qpair failed and we were unable to recover it. 00:26:46.510 [2024-07-15 18:52:03.079950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.510 [2024-07-15 18:52:03.079964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.510 qpair failed and we were unable to recover it. 00:26:46.510 [2024-07-15 18:52:03.080140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.510 [2024-07-15 18:52:03.080154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.510 qpair failed and we were unable to recover it. 00:26:46.510 [2024-07-15 18:52:03.080405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.510 [2024-07-15 18:52:03.080428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.510 qpair failed and we were unable to recover it. 00:26:46.510 [2024-07-15 18:52:03.080665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.510 [2024-07-15 18:52:03.080680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.510 qpair failed and we were unable to recover it. 
00:26:46.510 [2024-07-15 18:52:03.080924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.510 [2024-07-15 18:52:03.080938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.510 qpair failed and we were unable to recover it. 00:26:46.510 [2024-07-15 18:52:03.081121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.510 [2024-07-15 18:52:03.081135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.510 qpair failed and we were unable to recover it. 00:26:46.510 [2024-07-15 18:52:03.081386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.510 [2024-07-15 18:52:03.081401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.510 qpair failed and we were unable to recover it. 00:26:46.510 [2024-07-15 18:52:03.081606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.510 [2024-07-15 18:52:03.081627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.510 qpair failed and we were unable to recover it. 00:26:46.510 [2024-07-15 18:52:03.081761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.510 [2024-07-15 18:52:03.081776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.510 qpair failed and we were unable to recover it. 
00:26:46.510 [2024-07-15 18:52:03.081972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.510 [2024-07-15 18:52:03.081986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 [2024-07-15 18:52:03.082181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.082196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 [2024-07-15 18:52:03.082409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.082424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 [2024-07-15 18:52:03.082660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.082674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 [2024-07-15 18:52:03.082797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.082812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 
00:26:46.511 [2024-07-15 18:52:03.083005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.083020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 [2024-07-15 18:52:03.083291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.083308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 [2024-07-15 18:52:03.083526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.083540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 [2024-07-15 18:52:03.083745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.083760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 [2024-07-15 18:52:03.084039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.084055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 
00:26:46.511 [2024-07-15 18:52:03.084232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.084250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 [2024-07-15 18:52:03.084384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.084398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 [2024-07-15 18:52:03.084535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.084549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 [2024-07-15 18:52:03.084689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.084703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 [2024-07-15 18:52:03.084830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.084845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 
00:26:46.511 [2024-07-15 18:52:03.084979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.084994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 [2024-07-15 18:52:03.085133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.085148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 [2024-07-15 18:52:03.085399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.085414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 [2024-07-15 18:52:03.085594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.085608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 [2024-07-15 18:52:03.085745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.085759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 
00:26:46.511 [2024-07-15 18:52:03.086061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.086075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 [2024-07-15 18:52:03.086280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.086295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 [2024-07-15 18:52:03.086468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.086482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 [2024-07-15 18:52:03.086670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.086683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 [2024-07-15 18:52:03.086874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.086888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 
00:26:46.511 [2024-07-15 18:52:03.087124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.087138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 [2024-07-15 18:52:03.087320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.087334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 [2024-07-15 18:52:03.087539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.087553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 Malloc0 00:26:46.511 [2024-07-15 18:52:03.087698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.087714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 [2024-07-15 18:52:03.088020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.088034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 
00:26:46.511 [2024-07-15 18:52:03.088164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.088180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.511 [2024-07-15 18:52:03.088430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.088446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:46.511 [2024-07-15 18:52:03.088571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.088587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 [2024-07-15 18:52:03.088702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.088715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 
00:26:46.511 [2024-07-15 18:52:03.088855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.511 [2024-07-15 18:52:03.088870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 [2024-07-15 18:52:03.088993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.089007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:46.511 [2024-07-15 18:52:03.089183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.089199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 [2024-07-15 18:52:03.089404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.511 [2024-07-15 18:52:03.089423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.511 qpair failed and we were unable to recover it. 00:26:46.511 [2024-07-15 18:52:03.089549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.512 [2024-07-15 18:52:03.089559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.512 qpair failed and we were unable to recover it. 
00:26:46.512 [2024-07-15 18:52:03.089664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.512 [2024-07-15 18:52:03.089675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.512 qpair failed and we were unable to recover it. 00:26:46.512 [2024-07-15 18:52:03.089858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.512 [2024-07-15 18:52:03.089869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.512 qpair failed and we were unable to recover it. 00:26:46.512 [2024-07-15 18:52:03.089997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.512 [2024-07-15 18:52:03.090011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.512 qpair failed and we were unable to recover it. 00:26:46.512 [2024-07-15 18:52:03.090108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.512 [2024-07-15 18:52:03.090118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.512 qpair failed and we were unable to recover it. 00:26:46.512 [2024-07-15 18:52:03.090292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.512 [2024-07-15 18:52:03.090302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.512 qpair failed and we were unable to recover it. 
00:26:46.512 [2024-07-15 18:52:03.090476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.512 [2024-07-15 18:52:03.090486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.512 qpair failed and we were unable to recover it. 00:26:46.512 [2024-07-15 18:52:03.090597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.512 [2024-07-15 18:52:03.090608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.512 qpair failed and we were unable to recover it. 00:26:46.512 [2024-07-15 18:52:03.090794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.512 [2024-07-15 18:52:03.090803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.512 qpair failed and we were unable to recover it. 00:26:46.512 [2024-07-15 18:52:03.090966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.512 [2024-07-15 18:52:03.090976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.512 qpair failed and we were unable to recover it. 00:26:46.512 [2024-07-15 18:52:03.091043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.512 [2024-07-15 18:52:03.091054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.512 qpair failed and we were unable to recover it. 
00:26:46.512 [2024-07-15 18:52:03.091154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.512 [2024-07-15 18:52:03.091164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.512 qpair failed and we were unable to recover it. 00:26:46.512 [2024-07-15 18:52:03.091280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.512 [2024-07-15 18:52:03.091291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.512 qpair failed and we were unable to recover it. 00:26:46.512 [2024-07-15 18:52:03.091405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.512 [2024-07-15 18:52:03.091415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.512 qpair failed and we were unable to recover it. 00:26:46.512 [2024-07-15 18:52:03.091676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.512 [2024-07-15 18:52:03.091686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.512 qpair failed and we were unable to recover it. 00:26:46.512 [2024-07-15 18:52:03.091812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.512 [2024-07-15 18:52:03.091821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.512 qpair failed and we were unable to recover it. 
00:26:46.512 [2024-07-15 18:52:03.091937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.512 [2024-07-15 18:52:03.091947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.512 qpair failed and we were unable to recover it. 00:26:46.512 [2024-07-15 18:52:03.092056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.512 [2024-07-15 18:52:03.092066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.512 qpair failed and we were unable to recover it. 00:26:46.512 [2024-07-15 18:52:03.092247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.512 [2024-07-15 18:52:03.092257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.512 qpair failed and we were unable to recover it. 00:26:46.512 [2024-07-15 18:52:03.092372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.512 [2024-07-15 18:52:03.092382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.512 qpair failed and we were unable to recover it. 00:26:46.512 [2024-07-15 18:52:03.092553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.512 [2024-07-15 18:52:03.092563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.512 qpair failed and we were unable to recover it. 
00:26:46.512 [2024-07-15 18:52:03.092747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.512 [2024-07-15 18:52:03.092756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.512 qpair failed and we were unable to recover it. 00:26:46.512 [2024-07-15 18:52:03.092876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.512 [2024-07-15 18:52:03.092886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.512 qpair failed and we were unable to recover it. 00:26:46.512 [2024-07-15 18:52:03.093055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.512 [2024-07-15 18:52:03.093064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.512 qpair failed and we were unable to recover it. 00:26:46.512 [2024-07-15 18:52:03.093270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.512 [2024-07-15 18:52:03.093281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.512 qpair failed and we were unable to recover it. 00:26:46.512 [2024-07-15 18:52:03.093452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.512 [2024-07-15 18:52:03.093462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.512 qpair failed and we were unable to recover it. 
00:26:46.512 [2024-07-15 18:52:03.093714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.512 [2024-07-15 18:52:03.093724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.512 qpair failed and we were unable to recover it.
00:26:46.512 [2024-07-15 18:52:03.093969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.512 [2024-07-15 18:52:03.093979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.512 qpair failed and we were unable to recover it.
00:26:46.512 [2024-07-15 18:52:03.094233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.512 [2024-07-15 18:52:03.094244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.512 qpair failed and we were unable to recover it.
00:26:46.512 [2024-07-15 18:52:03.094429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.512 [2024-07-15 18:52:03.094439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.512 qpair failed and we were unable to recover it.
00:26:46.512 [2024-07-15 18:52:03.094601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.512 [2024-07-15 18:52:03.094611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.512 qpair failed and we were unable to recover it.
00:26:46.512 [2024-07-15 18:52:03.094725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.512 [2024-07-15 18:52:03.094735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.512 qpair failed and we were unable to recover it.
00:26:46.512 [2024-07-15 18:52:03.095026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.512 [2024-07-15 18:52:03.095035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.512 qpair failed and we were unable to recover it.
00:26:46.512 [2024-07-15 18:52:03.095263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.512 [2024-07-15 18:52:03.095273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.512 qpair failed and we were unable to recover it.
00:26:46.512 [2024-07-15 18:52:03.095381] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:46.512 [2024-07-15 18:52:03.095435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.512 [2024-07-15 18:52:03.095445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.512 qpair failed and we were unable to recover it.
00:26:46.512 [2024-07-15 18:52:03.095618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.512 [2024-07-15 18:52:03.095628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.512 qpair failed and we were unable to recover it.
00:26:46.512 [2024-07-15 18:52:03.095750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.512 [2024-07-15 18:52:03.095760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.512 qpair failed and we were unable to recover it.
00:26:46.512 [2024-07-15 18:52:03.096017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.512 [2024-07-15 18:52:03.096030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.512 qpair failed and we were unable to recover it.
00:26:46.512 [2024-07-15 18:52:03.096204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.512 [2024-07-15 18:52:03.096214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.512 qpair failed and we were unable to recover it.
00:26:46.512 [2024-07-15 18:52:03.096435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.512 [2024-07-15 18:52:03.096446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.512 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.096605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.096615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.096732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.096742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.096920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.096930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.097134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.097145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.097267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.097277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.097462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.097472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.097584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.097594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.097711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.097721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.097824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.097833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.098023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.098032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.098288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.098299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.098476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.098486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.098684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.098694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.098902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.098912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.099191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.099201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.099404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.099414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.099547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.099557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.099785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.099795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.099902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.099912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.100081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.100091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.100262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.100272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.100501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.100511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.100691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.100701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.100816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.100826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.101080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.101090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.101212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.101221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.101484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.101494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.101684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.101693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.101973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.101983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.102229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.102239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.102468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.102478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.102671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.102681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.102802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.102812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.102925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.102935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.103187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.103197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:46.513 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:46.513 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:46.513 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:46.513 [2024-07-15 18:52:03.104113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.104135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.104396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.104408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.513 [2024-07-15 18:52:03.104662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.513 [2024-07-15 18:52:03.104673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.513 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.104802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.104812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.104964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.104974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.105229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.105240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.105503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.105513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.105745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.105755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.106002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.106011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.106143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.106153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.106426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.106436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.106555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.106565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.106836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.106847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.107090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.107100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.107261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.107271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.107395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.107405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.107533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.107543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.107710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.107720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.107952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.107962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.108231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.108241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.108454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.108464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.108715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.108725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.108914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.108924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.109122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.109132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.109391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.109402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.109658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.109669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.109843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.109853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.110030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.110040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.110269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.110279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.110535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.110545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.110807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.110817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.111051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.111061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 [2024-07-15 18:52:03.111192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.111203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.514 qpair failed and we were unable to recover it.
00:26:46.514 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:46.514 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:46.514 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:46.514 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:46.514 [2024-07-15 18:52:03.111906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.514 [2024-07-15 18:52:03.111923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.515 qpair failed and we were unable to recover it.
00:26:46.515 [2024-07-15 18:52:03.112246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.515 [2024-07-15 18:52:03.112258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.515 qpair failed and we were unable to recover it.
00:26:46.515 [2024-07-15 18:52:03.112488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.515 [2024-07-15 18:52:03.112499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.515 qpair failed and we were unable to recover it.
00:26:46.515 [2024-07-15 18:52:03.112674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.515 [2024-07-15 18:52:03.112684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.515 qpair failed and we were unable to recover it.
00:26:46.515 [2024-07-15 18:52:03.112960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.515 [2024-07-15 18:52:03.112970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.515 qpair failed and we were unable to recover it.
00:26:46.515 [2024-07-15 18:52:03.113161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.515 [2024-07-15 18:52:03.113171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.515 qpair failed and we were unable to recover it.
00:26:46.515 [2024-07-15 18:52:03.113352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.515 [2024-07-15 18:52:03.113363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.515 qpair failed and we were unable to recover it.
00:26:46.515 [2024-07-15 18:52:03.113599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.515 [2024-07-15 18:52:03.113609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.515 qpair failed and we were unable to recover it.
00:26:46.515 [2024-07-15 18:52:03.113854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.515 [2024-07-15 18:52:03.113864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.515 qpair failed and we were unable to recover it.
00:26:46.515 [2024-07-15 18:52:03.114056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-07-15 18:52:03.114066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-07-15 18:52:03.114195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-07-15 18:52:03.114205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-07-15 18:52:03.114397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-07-15 18:52:03.114408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-07-15 18:52:03.114649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-07-15 18:52:03.114659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-07-15 18:52:03.114821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-07-15 18:52:03.114832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 
00:26:46.515 [2024-07-15 18:52:03.115103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-07-15 18:52:03.115113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-07-15 18:52:03.115229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-07-15 18:52:03.115240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-07-15 18:52:03.115482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-07-15 18:52:03.115492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-07-15 18:52:03.115658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-07-15 18:52:03.115668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-07-15 18:52:03.115931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-07-15 18:52:03.115941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 
00:26:46.515 [2024-07-15 18:52:03.116163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.515 [2024-07-15 18:52:03.116193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xebded0 with addr=10.0.0.2, port=4420
00:26:46.515 qpair failed and we were unable to recover it.
00:26:46.515 [2024-07-15 18:52:03.116492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.515 [2024-07-15 18:52:03.116509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:46.515 qpair failed and we were unable to recover it.
00:26:46.515 [2024-07-15 18:52:03.116697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.515 [2024-07-15 18:52:03.116711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:46.515 qpair failed and we were unable to recover it.
00:26:46.515 [2024-07-15 18:52:03.116969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.515 [2024-07-15 18:52:03.116983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:46.515 qpair failed and we were unable to recover it.
00:26:46.515 [2024-07-15 18:52:03.117174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.515 [2024-07-15 18:52:03.117188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:46.515 qpair failed and we were unable to recover it.
00:26:46.515 [2024-07-15 18:52:03.117414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.515 [2024-07-15 18:52:03.117429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:46.515 qpair failed and we were unable to recover it.
00:26:46.515 [2024-07-15 18:52:03.117632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.515 [2024-07-15 18:52:03.117645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:46.515 qpair failed and we were unable to recover it.
00:26:46.515 [2024-07-15 18:52:03.117883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.515 [2024-07-15 18:52:03.117896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:46.515 qpair failed and we were unable to recover it.
00:26:46.515 [2024-07-15 18:52:03.118086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.515 [2024-07-15 18:52:03.118099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:46.515 qpair failed and we were unable to recover it.
00:26:46.515 [2024-07-15 18:52:03.118338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.515 [2024-07-15 18:52:03.118352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:46.515 qpair failed and we were unable to recover it.
00:26:46.515 [2024-07-15 18:52:03.118566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.515 [2024-07-15 18:52:03.118580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:46.515 qpair failed and we were unable to recover it.
00:26:46.515 [2024-07-15 18:52:03.118762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.515 [2024-07-15 18:52:03.118776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:46.515 qpair failed and we were unable to recover it.
00:26:46.515 [2024-07-15 18:52:03.118956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.515 [2024-07-15 18:52:03.118970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:46.515 qpair failed and we were unable to recover it.
00:26:46.515 [2024-07-15 18:52:03.119151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.515 [2024-07-15 18:52:03.119171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:46.515 qpair failed and we were unable to recover it.
00:26:46.515 [2024-07-15 18:52:03.119359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.515 [2024-07-15 18:52:03.119373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:46.515 qpair failed and we were unable to recover it.
00:26:46.515 [2024-07-15 18:52:03.119499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.515 [2024-07-15 18:52:03.119513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:46.515 qpair failed and we were unable to recover it.
00:26:46.515 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:46.515 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:46.515 [2024-07-15 18:52:03.119696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.515 [2024-07-15 18:52:03.119711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:46.515 qpair failed and we were unable to recover it.
00:26:46.515 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:46.515 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:46.515 [2024-07-15 18:52:03.119896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.515 [2024-07-15 18:52:03.119911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c8000b90 with addr=10.0.0.2, port=4420
00:26:46.515 qpair failed and we were unable to recover it.
00:26:46.515 [2024-07-15 18:52:03.120177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.515 [2024-07-15 18:52:03.120195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.515 qpair failed and we were unable to recover it.
00:26:46.515 [2024-07-15 18:52:03.120433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.515 [2024-07-15 18:52:03.120445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.515 qpair failed and we were unable to recover it.
00:26:46.516 [2024-07-15 18:52:03.120688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.516 [2024-07-15 18:52:03.120699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.516 qpair failed and we were unable to recover it.
00:26:46.516 [2024-07-15 18:52:03.120856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.516 [2024-07-15 18:52:03.120866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.516 qpair failed and we were unable to recover it.
00:26:46.516 [2024-07-15 18:52:03.121111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.516 [2024-07-15 18:52:03.121121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.516 qpair failed and we were unable to recover it.
00:26:46.516 [2024-07-15 18:52:03.121250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.516 [2024-07-15 18:52:03.121260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.516 qpair failed and we were unable to recover it.
00:26:46.516 [2024-07-15 18:52:03.121368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.516 [2024-07-15 18:52:03.121378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.516 qpair failed and we were unable to recover it.
00:26:46.516 [2024-07-15 18:52:03.121608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.516 [2024-07-15 18:52:03.121618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.516 qpair failed and we were unable to recover it.
00:26:46.516 [2024-07-15 18:52:03.121781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.516 [2024-07-15 18:52:03.121791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.516 qpair failed and we were unable to recover it.
00:26:46.516 [2024-07-15 18:52:03.121898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.516 [2024-07-15 18:52:03.121908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.516 qpair failed and we were unable to recover it.
00:26:46.516 [2024-07-15 18:52:03.122144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.516 [2024-07-15 18:52:03.122155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.516 qpair failed and we were unable to recover it.
00:26:46.776 [2024-07-15 18:52:03.122347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.776 [2024-07-15 18:52:03.122358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.776 qpair failed and we were unable to recover it.
00:26:46.776 [2024-07-15 18:52:03.122599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.776 [2024-07-15 18:52:03.122610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.776 qpair failed and we were unable to recover it.
00:26:46.776 [2024-07-15 18:52:03.122694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.776 [2024-07-15 18:52:03.122706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.776 qpair failed and we were unable to recover it.
00:26:46.776 [2024-07-15 18:52:03.122880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.776 [2024-07-15 18:52:03.122890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.776 qpair failed and we were unable to recover it.
00:26:46.776 [2024-07-15 18:52:03.123023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.776 [2024-07-15 18:52:03.123033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.776 qpair failed and we were unable to recover it.
00:26:46.776 [2024-07-15 18:52:03.123234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.776 [2024-07-15 18:52:03.123244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.776 qpair failed and we were unable to recover it.
00:26:46.776 [2024-07-15 18:52:03.123461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.776 [2024-07-15 18:52:03.123471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff7c0000b90 with addr=10.0.0.2, port=4420
00:26:46.776 qpair failed and we were unable to recover it.
00:26:46.777 [2024-07-15 18:52:03.123603] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:46.777 [2024-07-15 18:52:03.125902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.777 [2024-07-15 18:52:03.126020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.777 [2024-07-15 18:52:03.126039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.777 [2024-07-15 18:52:03.126047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.777 [2024-07-15 18:52:03.126056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:46.777 [2024-07-15 18:52:03.126075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.777 qpair failed and we were unable to recover it.
00:26:46.777 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:46.777 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:26:46.777 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:46.777 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:46.777 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:46.777 18:52:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1248606
00:26:46.777 [2024-07-15 18:52:03.135907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.777 [2024-07-15 18:52:03.135979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.777 [2024-07-15 18:52:03.135995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.777 [2024-07-15 18:52:03.136002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.777 [2024-07-15 18:52:03.136008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:46.777 [2024-07-15 18:52:03.136024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.777 qpair failed and we were unable to recover it.
00:26:46.777 [2024-07-15 18:52:03.145861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.777 [2024-07-15 18:52:03.145932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.777 [2024-07-15 18:52:03.145947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.777 [2024-07-15 18:52:03.145954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.777 [2024-07-15 18:52:03.145960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:46.777 [2024-07-15 18:52:03.145975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.777 qpair failed and we were unable to recover it.
00:26:46.777 [2024-07-15 18:52:03.155878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.777 [2024-07-15 18:52:03.155953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.777 [2024-07-15 18:52:03.155968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.777 [2024-07-15 18:52:03.155975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.777 [2024-07-15 18:52:03.155980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:46.777 [2024-07-15 18:52:03.155995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.777 qpair failed and we were unable to recover it.
00:26:46.777 [2024-07-15 18:52:03.165895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.777 [2024-07-15 18:52:03.165963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.777 [2024-07-15 18:52:03.165981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.777 [2024-07-15 18:52:03.165988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.777 [2024-07-15 18:52:03.165993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:46.777 [2024-07-15 18:52:03.166008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.777 qpair failed and we were unable to recover it.
00:26:46.777 [2024-07-15 18:52:03.175928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.777 [2024-07-15 18:52:03.175991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.777 [2024-07-15 18:52:03.176008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.777 [2024-07-15 18:52:03.176015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.777 [2024-07-15 18:52:03.176021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:46.777 [2024-07-15 18:52:03.176036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.777 qpair failed and we were unable to recover it.
00:26:46.777 [2024-07-15 18:52:03.185968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.777 [2024-07-15 18:52:03.186052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.777 [2024-07-15 18:52:03.186067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.777 [2024-07-15 18:52:03.186074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.777 [2024-07-15 18:52:03.186080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:46.777 [2024-07-15 18:52:03.186096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.777 qpair failed and we were unable to recover it.
00:26:46.777 [2024-07-15 18:52:03.195973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.777 [2024-07-15 18:52:03.196044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.777 [2024-07-15 18:52:03.196062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.777 [2024-07-15 18:52:03.196069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.777 [2024-07-15 18:52:03.196075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:46.777 [2024-07-15 18:52:03.196089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.777 qpair failed and we were unable to recover it.
00:26:46.777 [2024-07-15 18:52:03.206013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.777 [2024-07-15 18:52:03.206081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.777 [2024-07-15 18:52:03.206096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.777 [2024-07-15 18:52:03.206103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.777 [2024-07-15 18:52:03.206109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:46.777 [2024-07-15 18:52:03.206125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.777 qpair failed and we were unable to recover it.
00:26:46.777 [2024-07-15 18:52:03.216035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.777 [2024-07-15 18:52:03.216094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.777 [2024-07-15 18:52:03.216110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.777 [2024-07-15 18:52:03.216117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.777 [2024-07-15 18:52:03.216123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:46.777 [2024-07-15 18:52:03.216137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.777 qpair failed and we were unable to recover it.
00:26:46.777 [2024-07-15 18:52:03.226104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.777 [2024-07-15 18:52:03.226218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.777 [2024-07-15 18:52:03.226239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.777 [2024-07-15 18:52:03.226246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.777 [2024-07-15 18:52:03.226253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:46.777 [2024-07-15 18:52:03.226268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.777 qpair failed and we were unable to recover it.
00:26:46.777 [2024-07-15 18:52:03.236099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.777 [2024-07-15 18:52:03.236166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.777 [2024-07-15 18:52:03.236180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.777 [2024-07-15 18:52:03.236187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.777 [2024-07-15 18:52:03.236192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:46.777 [2024-07-15 18:52:03.236207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.777 qpair failed and we were unable to recover it.
00:26:46.777 [2024-07-15 18:52:03.246132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.777 [2024-07-15 18:52:03.246201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.777 [2024-07-15 18:52:03.246216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.777 [2024-07-15 18:52:03.246223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.777 [2024-07-15 18:52:03.246233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:46.777 [2024-07-15 18:52:03.246248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.777 qpair failed and we were unable to recover it.
00:26:46.777 [2024-07-15 18:52:03.256175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.777 [2024-07-15 18:52:03.256252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.777 [2024-07-15 18:52:03.256267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.777 [2024-07-15 18:52:03.256274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.777 [2024-07-15 18:52:03.256279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:46.777 [2024-07-15 18:52:03.256294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.777 qpair failed and we were unable to recover it.
00:26:46.777 [2024-07-15 18:52:03.266205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.777 [2024-07-15 18:52:03.266275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.777 [2024-07-15 18:52:03.266290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.777 [2024-07-15 18:52:03.266297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.777 [2024-07-15 18:52:03.266303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:46.777 [2024-07-15 18:52:03.266317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.777 qpair failed and we were unable to recover it.
00:26:46.777 [2024-07-15 18:52:03.276143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.777 [2024-07-15 18:52:03.276208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.777 [2024-07-15 18:52:03.276223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.777 [2024-07-15 18:52:03.276235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.777 [2024-07-15 18:52:03.276241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:46.777 [2024-07-15 18:52:03.276256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.777 qpair failed and we were unable to recover it.
00:26:46.777 [2024-07-15 18:52:03.286258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.777 [2024-07-15 18:52:03.286323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.777 [2024-07-15 18:52:03.286339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.777 [2024-07-15 18:52:03.286345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.777 [2024-07-15 18:52:03.286352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:46.777 [2024-07-15 18:52:03.286367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.777 qpair failed and we were unable to recover it.
00:26:46.777 [2024-07-15 18:52:03.296265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.777 [2024-07-15 18:52:03.296335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.777 [2024-07-15 18:52:03.296351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.777 [2024-07-15 18:52:03.296358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.777 [2024-07-15 18:52:03.296367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:46.777 [2024-07-15 18:52:03.296383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.777 qpair failed and we were unable to recover it. 
00:26:46.777 [2024-07-15 18:52:03.306288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.777 [2024-07-15 18:52:03.306349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.777 [2024-07-15 18:52:03.306364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.777 [2024-07-15 18:52:03.306371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.777 [2024-07-15 18:52:03.306377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:46.777 [2024-07-15 18:52:03.306393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.777 qpair failed and we were unable to recover it. 
00:26:46.777 [2024-07-15 18:52:03.316253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.777 [2024-07-15 18:52:03.316318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.777 [2024-07-15 18:52:03.316332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.778 [2024-07-15 18:52:03.316339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.778 [2024-07-15 18:52:03.316344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:46.778 [2024-07-15 18:52:03.316359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.778 qpair failed and we were unable to recover it. 
00:26:46.778 [2024-07-15 18:52:03.326385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.778 [2024-07-15 18:52:03.326461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.778 [2024-07-15 18:52:03.326475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.778 [2024-07-15 18:52:03.326482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.778 [2024-07-15 18:52:03.326487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:46.778 [2024-07-15 18:52:03.326501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.778 qpair failed and we were unable to recover it. 
00:26:46.778 [2024-07-15 18:52:03.336473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.778 [2024-07-15 18:52:03.336565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.778 [2024-07-15 18:52:03.336579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.778 [2024-07-15 18:52:03.336585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.778 [2024-07-15 18:52:03.336591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:46.778 [2024-07-15 18:52:03.336605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.778 qpair failed and we were unable to recover it. 
00:26:46.778 [2024-07-15 18:52:03.346396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.778 [2024-07-15 18:52:03.346465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.778 [2024-07-15 18:52:03.346480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.778 [2024-07-15 18:52:03.346486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.778 [2024-07-15 18:52:03.346493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:46.778 [2024-07-15 18:52:03.346506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.778 qpair failed and we were unable to recover it. 
00:26:46.778 [2024-07-15 18:52:03.356442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.778 [2024-07-15 18:52:03.356512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.778 [2024-07-15 18:52:03.356526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.778 [2024-07-15 18:52:03.356533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.778 [2024-07-15 18:52:03.356538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:46.778 [2024-07-15 18:52:03.356552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.778 qpair failed and we were unable to recover it. 
00:26:46.778 [2024-07-15 18:52:03.366499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.778 [2024-07-15 18:52:03.366569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.778 [2024-07-15 18:52:03.366584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.778 [2024-07-15 18:52:03.366591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.778 [2024-07-15 18:52:03.366596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:46.778 [2024-07-15 18:52:03.366611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.778 qpair failed and we were unable to recover it. 
00:26:46.778 [2024-07-15 18:52:03.376578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.778 [2024-07-15 18:52:03.376674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.778 [2024-07-15 18:52:03.376688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.778 [2024-07-15 18:52:03.376695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.778 [2024-07-15 18:52:03.376700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:46.778 [2024-07-15 18:52:03.376714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.778 qpair failed and we were unable to recover it. 
00:26:46.778 [2024-07-15 18:52:03.386517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.778 [2024-07-15 18:52:03.386582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.778 [2024-07-15 18:52:03.386596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.778 [2024-07-15 18:52:03.386606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.778 [2024-07-15 18:52:03.386612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:46.778 [2024-07-15 18:52:03.386627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.778 qpair failed and we were unable to recover it. 
00:26:46.778 [2024-07-15 18:52:03.396600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.778 [2024-07-15 18:52:03.396675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.778 [2024-07-15 18:52:03.396689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.778 [2024-07-15 18:52:03.396696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.778 [2024-07-15 18:52:03.396702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:46.778 [2024-07-15 18:52:03.396716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.778 qpair failed and we were unable to recover it. 
00:26:46.778 [2024-07-15 18:52:03.406648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.778 [2024-07-15 18:52:03.406717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.778 [2024-07-15 18:52:03.406731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.778 [2024-07-15 18:52:03.406738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.778 [2024-07-15 18:52:03.406744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:46.778 [2024-07-15 18:52:03.406758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.778 qpair failed and we were unable to recover it. 
00:26:46.778 [2024-07-15 18:52:03.416577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.778 [2024-07-15 18:52:03.416644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.778 [2024-07-15 18:52:03.416658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.778 [2024-07-15 18:52:03.416665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.778 [2024-07-15 18:52:03.416671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:46.778 [2024-07-15 18:52:03.416685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.778 qpair failed and we were unable to recover it. 
00:26:46.778 [2024-07-15 18:52:03.426645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.778 [2024-07-15 18:52:03.426707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.778 [2024-07-15 18:52:03.426721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.778 [2024-07-15 18:52:03.426727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.778 [2024-07-15 18:52:03.426733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:46.778 [2024-07-15 18:52:03.426747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.778 qpair failed and we were unable to recover it. 
00:26:46.778 [2024-07-15 18:52:03.436662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.778 [2024-07-15 18:52:03.436730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.778 [2024-07-15 18:52:03.436744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.778 [2024-07-15 18:52:03.436751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.778 [2024-07-15 18:52:03.436757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:46.778 [2024-07-15 18:52:03.436771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.778 qpair failed and we were unable to recover it. 
00:26:46.778 [2024-07-15 18:52:03.446684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.778 [2024-07-15 18:52:03.446751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.778 [2024-07-15 18:52:03.446766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.778 [2024-07-15 18:52:03.446773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.778 [2024-07-15 18:52:03.446778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:46.778 [2024-07-15 18:52:03.446793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.778 qpair failed and we were unable to recover it. 
00:26:46.778 [2024-07-15 18:52:03.456690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.778 [2024-07-15 18:52:03.456765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.778 [2024-07-15 18:52:03.456779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.778 [2024-07-15 18:52:03.456786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.778 [2024-07-15 18:52:03.456792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:46.778 [2024-07-15 18:52:03.456806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.778 qpair failed and we were unable to recover it. 
00:26:46.778 [2024-07-15 18:52:03.466766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.778 [2024-07-15 18:52:03.466832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.778 [2024-07-15 18:52:03.466846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.778 [2024-07-15 18:52:03.466853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.778 [2024-07-15 18:52:03.466858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:46.778 [2024-07-15 18:52:03.466873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.778 qpair failed and we were unable to recover it. 
00:26:46.778 [2024-07-15 18:52:03.476806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.778 [2024-07-15 18:52:03.476878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.778 [2024-07-15 18:52:03.476898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.778 [2024-07-15 18:52:03.476908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.778 [2024-07-15 18:52:03.476914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:46.778 [2024-07-15 18:52:03.476929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:46.778 qpair failed and we were unable to recover it. 
00:26:47.038 [2024-07-15 18:52:03.486823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.038 [2024-07-15 18:52:03.486897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.038 [2024-07-15 18:52:03.486913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.038 [2024-07-15 18:52:03.486920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.038 [2024-07-15 18:52:03.486926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.038 [2024-07-15 18:52:03.486941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.038 qpair failed and we were unable to recover it. 
00:26:47.038 [2024-07-15 18:52:03.496787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.038 [2024-07-15 18:52:03.496852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.038 [2024-07-15 18:52:03.496867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.038 [2024-07-15 18:52:03.496874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.038 [2024-07-15 18:52:03.496880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.038 [2024-07-15 18:52:03.496894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.038 qpair failed and we were unable to recover it. 
00:26:47.038 [2024-07-15 18:52:03.506818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.038 [2024-07-15 18:52:03.506883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.038 [2024-07-15 18:52:03.506898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.038 [2024-07-15 18:52:03.506904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.038 [2024-07-15 18:52:03.506910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.038 [2024-07-15 18:52:03.506925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.038 qpair failed and we were unable to recover it. 
00:26:47.038 [2024-07-15 18:52:03.516838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.038 [2024-07-15 18:52:03.516901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.038 [2024-07-15 18:52:03.516916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.038 [2024-07-15 18:52:03.516923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.038 [2024-07-15 18:52:03.516928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.038 [2024-07-15 18:52:03.516946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.038 qpair failed and we were unable to recover it. 
00:26:47.038 [2024-07-15 18:52:03.526931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.038 [2024-07-15 18:52:03.527001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.038 [2024-07-15 18:52:03.527017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.038 [2024-07-15 18:52:03.527024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.038 [2024-07-15 18:52:03.527030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.038 [2024-07-15 18:52:03.527044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.038 qpair failed and we were unable to recover it. 
00:26:47.038 [2024-07-15 18:52:03.536905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.038 [2024-07-15 18:52:03.536999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.038 [2024-07-15 18:52:03.537013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.038 [2024-07-15 18:52:03.537020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.038 [2024-07-15 18:52:03.537025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.038 [2024-07-15 18:52:03.537040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.038 qpair failed and we were unable to recover it. 
00:26:47.038 [2024-07-15 18:52:03.546983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.038 [2024-07-15 18:52:03.547044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.038 [2024-07-15 18:52:03.547059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.038 [2024-07-15 18:52:03.547065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.038 [2024-07-15 18:52:03.547071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.038 [2024-07-15 18:52:03.547085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.038 qpair failed and we were unable to recover it. 
00:26:47.038 [2024-07-15 18:52:03.557005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.038 [2024-07-15 18:52:03.557071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.038 [2024-07-15 18:52:03.557085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.038 [2024-07-15 18:52:03.557092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.038 [2024-07-15 18:52:03.557097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.038 [2024-07-15 18:52:03.557112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.038 qpair failed and we were unable to recover it. 
00:26:47.038 [2024-07-15 18:52:03.567099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.038 [2024-07-15 18:52:03.567209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.038 [2024-07-15 18:52:03.567232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.038 [2024-07-15 18:52:03.567239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.038 [2024-07-15 18:52:03.567245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.038 [2024-07-15 18:52:03.567260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.038 qpair failed and we were unable to recover it. 
00:26:47.038 [2024-07-15 18:52:03.577001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.038 [2024-07-15 18:52:03.577067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.038 [2024-07-15 18:52:03.577081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.038 [2024-07-15 18:52:03.577088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.038 [2024-07-15 18:52:03.577094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.038 [2024-07-15 18:52:03.577108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.038 qpair failed and we were unable to recover it. 
00:26:47.038 [2024-07-15 18:52:03.587071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.038 [2024-07-15 18:52:03.587132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.038 [2024-07-15 18:52:03.587146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.038 [2024-07-15 18:52:03.587153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.038 [2024-07-15 18:52:03.587159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.038 [2024-07-15 18:52:03.587173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.038 qpair failed and we were unable to recover it. 
00:26:47.038 [2024-07-15 18:52:03.597078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.038 [2024-07-15 18:52:03.597139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.038 [2024-07-15 18:52:03.597154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.038 [2024-07-15 18:52:03.597161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.038 [2024-07-15 18:52:03.597167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.038 [2024-07-15 18:52:03.597181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.038 qpair failed and we were unable to recover it. 
00:26:47.038 [2024-07-15 18:52:03.607161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.038 [2024-07-15 18:52:03.607239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.038 [2024-07-15 18:52:03.607254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.038 [2024-07-15 18:52:03.607260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.038 [2024-07-15 18:52:03.607266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.038 [2024-07-15 18:52:03.607283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.038 qpair failed and we were unable to recover it. 
00:26:47.038 [2024-07-15 18:52:03.617202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.038 [2024-07-15 18:52:03.617269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.038 [2024-07-15 18:52:03.617283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.038 [2024-07-15 18:52:03.617290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.038 [2024-07-15 18:52:03.617295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.038 [2024-07-15 18:52:03.617310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.038 qpair failed and we were unable to recover it. 
00:26:47.038 [2024-07-15 18:52:03.627245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.038 [2024-07-15 18:52:03.627313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.038 [2024-07-15 18:52:03.627327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.038 [2024-07-15 18:52:03.627334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.038 [2024-07-15 18:52:03.627340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.038 [2024-07-15 18:52:03.627354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.038 qpair failed and we were unable to recover it. 
00:26:47.038 [2024-07-15 18:52:03.637254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.038 [2024-07-15 18:52:03.637320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.038 [2024-07-15 18:52:03.637334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.038 [2024-07-15 18:52:03.637341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.038 [2024-07-15 18:52:03.637346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.038 [2024-07-15 18:52:03.637360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.038 qpair failed and we were unable to recover it. 
00:26:47.038 [2024-07-15 18:52:03.647283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.038 [2024-07-15 18:52:03.647349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.038 [2024-07-15 18:52:03.647363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.038 [2024-07-15 18:52:03.647370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.038 [2024-07-15 18:52:03.647376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.038 [2024-07-15 18:52:03.647391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.038 qpair failed and we were unable to recover it. 
00:26:47.038 [2024-07-15 18:52:03.657359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.038 [2024-07-15 18:52:03.657422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.038 [2024-07-15 18:52:03.657439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.038 [2024-07-15 18:52:03.657446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.038 [2024-07-15 18:52:03.657452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.038 [2024-07-15 18:52:03.657466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.038 qpair failed and we were unable to recover it. 
00:26:47.038 [2024-07-15 18:52:03.667325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.038 [2024-07-15 18:52:03.667387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.038 [2024-07-15 18:52:03.667401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.038 [2024-07-15 18:52:03.667408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.038 [2024-07-15 18:52:03.667414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.038 [2024-07-15 18:52:03.667428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.038 qpair failed and we were unable to recover it. 
00:26:47.038 [2024-07-15 18:52:03.677362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.038 [2024-07-15 18:52:03.677425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.038 [2024-07-15 18:52:03.677439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.038 [2024-07-15 18:52:03.677445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.038 [2024-07-15 18:52:03.677451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.039 [2024-07-15 18:52:03.677465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.039 qpair failed and we were unable to recover it. 
00:26:47.039 [2024-07-15 18:52:03.687403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.039 [2024-07-15 18:52:03.687469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.039 [2024-07-15 18:52:03.687484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.039 [2024-07-15 18:52:03.687490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.039 [2024-07-15 18:52:03.687496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.039 [2024-07-15 18:52:03.687510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.039 qpair failed and we were unable to recover it. 
00:26:47.039 [2024-07-15 18:52:03.697428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.039 [2024-07-15 18:52:03.697529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.039 [2024-07-15 18:52:03.697543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.039 [2024-07-15 18:52:03.697550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.039 [2024-07-15 18:52:03.697559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.039 [2024-07-15 18:52:03.697574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.039 qpair failed and we were unable to recover it. 
00:26:47.039 [2024-07-15 18:52:03.707434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.039 [2024-07-15 18:52:03.707519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.039 [2024-07-15 18:52:03.707533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.039 [2024-07-15 18:52:03.707540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.039 [2024-07-15 18:52:03.707545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.039 [2024-07-15 18:52:03.707560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.039 qpair failed and we were unable to recover it. 
00:26:47.039 [2024-07-15 18:52:03.717482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.039 [2024-07-15 18:52:03.717547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.039 [2024-07-15 18:52:03.717562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.039 [2024-07-15 18:52:03.717568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.039 [2024-07-15 18:52:03.717573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.039 [2024-07-15 18:52:03.717587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.039 qpair failed and we were unable to recover it. 
00:26:47.039 [2024-07-15 18:52:03.727512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.039 [2024-07-15 18:52:03.727575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.039 [2024-07-15 18:52:03.727590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.039 [2024-07-15 18:52:03.727596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.039 [2024-07-15 18:52:03.727602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.039 [2024-07-15 18:52:03.727616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.039 qpair failed and we were unable to recover it. 
00:26:47.039 [2024-07-15 18:52:03.737529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.039 [2024-07-15 18:52:03.737627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.039 [2024-07-15 18:52:03.737642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.039 [2024-07-15 18:52:03.737649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.039 [2024-07-15 18:52:03.737655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.039 [2024-07-15 18:52:03.737670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.039 qpair failed and we were unable to recover it. 
00:26:47.298 [2024-07-15 18:52:03.747582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.298 [2024-07-15 18:52:03.747650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.298 [2024-07-15 18:52:03.747665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.298 [2024-07-15 18:52:03.747672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.298 [2024-07-15 18:52:03.747678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.298 [2024-07-15 18:52:03.747692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.298 qpair failed and we were unable to recover it. 
00:26:47.298 [2024-07-15 18:52:03.757586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.298 [2024-07-15 18:52:03.757648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.298 [2024-07-15 18:52:03.757662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.299 [2024-07-15 18:52:03.757669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.299 [2024-07-15 18:52:03.757675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.299 [2024-07-15 18:52:03.757689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.299 qpair failed and we were unable to recover it. 
00:26:47.299 [2024-07-15 18:52:03.767627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.299 [2024-07-15 18:52:03.767690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.299 [2024-07-15 18:52:03.767705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.299 [2024-07-15 18:52:03.767711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.299 [2024-07-15 18:52:03.767717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.299 [2024-07-15 18:52:03.767732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.299 qpair failed and we were unable to recover it. 
00:26:47.299 [2024-07-15 18:52:03.777613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.299 [2024-07-15 18:52:03.777682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.299 [2024-07-15 18:52:03.777696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.299 [2024-07-15 18:52:03.777703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.299 [2024-07-15 18:52:03.777709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.299 [2024-07-15 18:52:03.777723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.299 qpair failed and we were unable to recover it. 
00:26:47.299 [2024-07-15 18:52:03.787681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.299 [2024-07-15 18:52:03.787742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.299 [2024-07-15 18:52:03.787757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.299 [2024-07-15 18:52:03.787766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.299 [2024-07-15 18:52:03.787772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.299 [2024-07-15 18:52:03.787787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.299 qpair failed and we were unable to recover it. 
00:26:47.299 [2024-07-15 18:52:03.797700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.299 [2024-07-15 18:52:03.797761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.299 [2024-07-15 18:52:03.797775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.299 [2024-07-15 18:52:03.797781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.299 [2024-07-15 18:52:03.797787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.299 [2024-07-15 18:52:03.797801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.299 qpair failed and we were unable to recover it. 
00:26:47.299 [2024-07-15 18:52:03.807739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.299 [2024-07-15 18:52:03.807799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.299 [2024-07-15 18:52:03.807813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.299 [2024-07-15 18:52:03.807820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.299 [2024-07-15 18:52:03.807825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.299 [2024-07-15 18:52:03.807840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.299 qpair failed and we were unable to recover it. 
00:26:47.299 [2024-07-15 18:52:03.817763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.299 [2024-07-15 18:52:03.817825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.299 [2024-07-15 18:52:03.817839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.299 [2024-07-15 18:52:03.817846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.299 [2024-07-15 18:52:03.817851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.299 [2024-07-15 18:52:03.817866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.299 qpair failed and we were unable to recover it. 
00:26:47.299 [2024-07-15 18:52:03.827790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.299 [2024-07-15 18:52:03.827858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.299 [2024-07-15 18:52:03.827872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.299 [2024-07-15 18:52:03.827878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.299 [2024-07-15 18:52:03.827884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.299 [2024-07-15 18:52:03.827898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.299 qpair failed and we were unable to recover it. 
00:26:47.299 [2024-07-15 18:52:03.837835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.299 [2024-07-15 18:52:03.837902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.299 [2024-07-15 18:52:03.837917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.299 [2024-07-15 18:52:03.837923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.299 [2024-07-15 18:52:03.837929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.299 [2024-07-15 18:52:03.837943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.299 qpair failed and we were unable to recover it. 
00:26:47.299 [2024-07-15 18:52:03.847831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.299 [2024-07-15 18:52:03.847915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.299 [2024-07-15 18:52:03.847929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.299 [2024-07-15 18:52:03.847936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.299 [2024-07-15 18:52:03.847941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.299 [2024-07-15 18:52:03.847955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.299 qpair failed and we were unable to recover it. 
00:26:47.299 [2024-07-15 18:52:03.857894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.299 [2024-07-15 18:52:03.857960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.299 [2024-07-15 18:52:03.857974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.299 [2024-07-15 18:52:03.857981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.299 [2024-07-15 18:52:03.857987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.299 [2024-07-15 18:52:03.858002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.299 qpair failed and we were unable to recover it. 
00:26:47.299 [2024-07-15 18:52:03.867928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.299 [2024-07-15 18:52:03.867994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.299 [2024-07-15 18:52:03.868009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.299 [2024-07-15 18:52:03.868016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.299 [2024-07-15 18:52:03.868021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.299 [2024-07-15 18:52:03.868036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.299 qpair failed and we were unable to recover it. 
00:26:47.299 [2024-07-15 18:52:03.877977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.299 [2024-07-15 18:52:03.878050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.299 [2024-07-15 18:52:03.878065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.299 [2024-07-15 18:52:03.878074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.299 [2024-07-15 18:52:03.878080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.299 [2024-07-15 18:52:03.878095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.299 qpair failed and we were unable to recover it. 
00:26:47.299 [2024-07-15 18:52:03.888022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.299 [2024-07-15 18:52:03.888130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.299 [2024-07-15 18:52:03.888149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.299 [2024-07-15 18:52:03.888156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.299 [2024-07-15 18:52:03.888162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.299 [2024-07-15 18:52:03.888177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.299 qpair failed and we were unable to recover it. 
00:26:47.299 [2024-07-15 18:52:03.897989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.299 [2024-07-15 18:52:03.898053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.299 [2024-07-15 18:52:03.898067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.299 [2024-07-15 18:52:03.898074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.300 [2024-07-15 18:52:03.898079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.300 [2024-07-15 18:52:03.898094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.300 qpair failed and we were unable to recover it. 
00:26:47.300 [2024-07-15 18:52:03.908014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.300 [2024-07-15 18:52:03.908076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.300 [2024-07-15 18:52:03.908091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.300 [2024-07-15 18:52:03.908097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.300 [2024-07-15 18:52:03.908103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.300 [2024-07-15 18:52:03.908117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.300 qpair failed and we were unable to recover it. 
00:26:47.300 [2024-07-15 18:52:03.918036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.300 [2024-07-15 18:52:03.918097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.300 [2024-07-15 18:52:03.918111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.300 [2024-07-15 18:52:03.918118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.300 [2024-07-15 18:52:03.918123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.300 [2024-07-15 18:52:03.918138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.300 qpair failed and we were unable to recover it. 
00:26:47.300 [2024-07-15 18:52:03.928071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.300 [2024-07-15 18:52:03.928134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.300 [2024-07-15 18:52:03.928149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.300 [2024-07-15 18:52:03.928156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.300 [2024-07-15 18:52:03.928162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.300 [2024-07-15 18:52:03.928176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.300 qpair failed and we were unable to recover it. 
00:26:47.300 [2024-07-15 18:52:03.938086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.300 [2024-07-15 18:52:03.938158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.300 [2024-07-15 18:52:03.938172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.300 [2024-07-15 18:52:03.938179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.300 [2024-07-15 18:52:03.938185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.300 [2024-07-15 18:52:03.938198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.300 qpair failed and we were unable to recover it. 
00:26:47.300 [2024-07-15 18:52:03.948112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.300 [2024-07-15 18:52:03.948174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.300 [2024-07-15 18:52:03.948188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.300 [2024-07-15 18:52:03.948194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.300 [2024-07-15 18:52:03.948200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.300 [2024-07-15 18:52:03.948215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.300 qpair failed and we were unable to recover it. 
00:26:47.300 [2024-07-15 18:52:03.958160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.300 [2024-07-15 18:52:03.958228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.300 [2024-07-15 18:52:03.958243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.300 [2024-07-15 18:52:03.958249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.300 [2024-07-15 18:52:03.958255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.300 [2024-07-15 18:52:03.958269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.300 qpair failed and we were unable to recover it. 
00:26:47.300 [2024-07-15 18:52:03.968227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.300 [2024-07-15 18:52:03.968342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.300 [2024-07-15 18:52:03.968361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.300 [2024-07-15 18:52:03.968367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.300 [2024-07-15 18:52:03.968373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.300 [2024-07-15 18:52:03.968387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.300 qpair failed and we were unable to recover it. 
00:26:47.300 [2024-07-15 18:52:03.978243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.300 [2024-07-15 18:52:03.978348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.300 [2024-07-15 18:52:03.978389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.300 [2024-07-15 18:52:03.978396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.300 [2024-07-15 18:52:03.978402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.300 [2024-07-15 18:52:03.978417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.300 qpair failed and we were unable to recover it. 
00:26:47.300 [2024-07-15 18:52:03.988279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.300 [2024-07-15 18:52:03.988347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.300 [2024-07-15 18:52:03.988361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.300 [2024-07-15 18:52:03.988368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.300 [2024-07-15 18:52:03.988374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.300 [2024-07-15 18:52:03.988388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.300 qpair failed and we were unable to recover it. 
00:26:47.300 [2024-07-15 18:52:03.998280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.300 [2024-07-15 18:52:03.998345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.300 [2024-07-15 18:52:03.998358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.300 [2024-07-15 18:52:03.998365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.300 [2024-07-15 18:52:03.998371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.300 [2024-07-15 18:52:03.998386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.300 qpair failed and we were unable to recover it. 
00:26:47.559 [2024-07-15 18:52:04.008320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.559 [2024-07-15 18:52:04.008397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.559 [2024-07-15 18:52:04.008412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.559 [2024-07-15 18:52:04.008419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.559 [2024-07-15 18:52:04.008425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.559 [2024-07-15 18:52:04.008443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.559 qpair failed and we were unable to recover it. 
00:26:47.559 [2024-07-15 18:52:04.018285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.559 [2024-07-15 18:52:04.018343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.559 [2024-07-15 18:52:04.018357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.559 [2024-07-15 18:52:04.018363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.559 [2024-07-15 18:52:04.018369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.560 [2024-07-15 18:52:04.018384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.560 qpair failed and we were unable to recover it. 
00:26:47.560 [2024-07-15 18:52:04.028373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.560 [2024-07-15 18:52:04.028439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.560 [2024-07-15 18:52:04.028453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.560 [2024-07-15 18:52:04.028459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.560 [2024-07-15 18:52:04.028465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.560 [2024-07-15 18:52:04.028479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.560 qpair failed and we were unable to recover it. 
00:26:47.560 [2024-07-15 18:52:04.038396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.560 [2024-07-15 18:52:04.038457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.560 [2024-07-15 18:52:04.038472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.560 [2024-07-15 18:52:04.038478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.560 [2024-07-15 18:52:04.038484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.560 [2024-07-15 18:52:04.038498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.560 qpair failed and we were unable to recover it. 
00:26:47.560 [2024-07-15 18:52:04.048425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.560 [2024-07-15 18:52:04.048489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.560 [2024-07-15 18:52:04.048504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.560 [2024-07-15 18:52:04.048510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.560 [2024-07-15 18:52:04.048516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.560 [2024-07-15 18:52:04.048530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.560 qpair failed and we were unable to recover it. 
00:26:47.560 [2024-07-15 18:52:04.058458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.560 [2024-07-15 18:52:04.058522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.560 [2024-07-15 18:52:04.058540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.560 [2024-07-15 18:52:04.058546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.560 [2024-07-15 18:52:04.058552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.560 [2024-07-15 18:52:04.058566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.560 qpair failed and we were unable to recover it. 
00:26:47.560 [2024-07-15 18:52:04.068503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.560 [2024-07-15 18:52:04.068566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.560 [2024-07-15 18:52:04.068582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.560 [2024-07-15 18:52:04.068588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.560 [2024-07-15 18:52:04.068594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.560 [2024-07-15 18:52:04.068608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.560 qpair failed and we were unable to recover it. 
00:26:47.560 [2024-07-15 18:52:04.078497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.560 [2024-07-15 18:52:04.078558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.560 [2024-07-15 18:52:04.078573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.560 [2024-07-15 18:52:04.078579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.560 [2024-07-15 18:52:04.078586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.560 [2024-07-15 18:52:04.078599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.560 qpair failed and we were unable to recover it. 
00:26:47.560 [2024-07-15 18:52:04.088567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.560 [2024-07-15 18:52:04.088630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.560 [2024-07-15 18:52:04.088645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.560 [2024-07-15 18:52:04.088651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.560 [2024-07-15 18:52:04.088657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.560 [2024-07-15 18:52:04.088671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.560 qpair failed and we were unable to recover it. 
00:26:47.560 [2024-07-15 18:52:04.098532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.560 [2024-07-15 18:52:04.098594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.560 [2024-07-15 18:52:04.098608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.560 [2024-07-15 18:52:04.098614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.560 [2024-07-15 18:52:04.098624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.560 [2024-07-15 18:52:04.098637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.560 qpair failed and we were unable to recover it. 
00:26:47.560 [2024-07-15 18:52:04.108604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.560 [2024-07-15 18:52:04.108673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.560 [2024-07-15 18:52:04.108687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.560 [2024-07-15 18:52:04.108694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.560 [2024-07-15 18:52:04.108700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.560 [2024-07-15 18:52:04.108714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.560 qpair failed and we were unable to recover it. 
00:26:47.560 [2024-07-15 18:52:04.118615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.560 [2024-07-15 18:52:04.118682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.560 [2024-07-15 18:52:04.118696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.560 [2024-07-15 18:52:04.118703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.560 [2024-07-15 18:52:04.118709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.560 [2024-07-15 18:52:04.118723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.560 qpair failed and we were unable to recover it. 
00:26:47.560 [2024-07-15 18:52:04.128655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.560 [2024-07-15 18:52:04.128724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.560 [2024-07-15 18:52:04.128739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.560 [2024-07-15 18:52:04.128745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.560 [2024-07-15 18:52:04.128751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.560 [2024-07-15 18:52:04.128766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.560 qpair failed and we were unable to recover it. 
00:26:47.560 [2024-07-15 18:52:04.138687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.560 [2024-07-15 18:52:04.138755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.560 [2024-07-15 18:52:04.138769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.560 [2024-07-15 18:52:04.138776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.560 [2024-07-15 18:52:04.138782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.560 [2024-07-15 18:52:04.138795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.560 qpair failed and we were unable to recover it. 
00:26:47.560 [2024-07-15 18:52:04.148713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.560 [2024-07-15 18:52:04.148821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.560 [2024-07-15 18:52:04.148840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.560 [2024-07-15 18:52:04.148847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.560 [2024-07-15 18:52:04.148853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.560 [2024-07-15 18:52:04.148869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.560 qpair failed and we were unable to recover it. 
00:26:47.560 [2024-07-15 18:52:04.158732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.560 [2024-07-15 18:52:04.158794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.560 [2024-07-15 18:52:04.158809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.560 [2024-07-15 18:52:04.158815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.560 [2024-07-15 18:52:04.158822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.560 [2024-07-15 18:52:04.158836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.560 qpair failed and we were unable to recover it. 
00:26:47.560 [2024-07-15 18:52:04.168772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.560 [2024-07-15 18:52:04.168838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.560 [2024-07-15 18:52:04.168852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.560 [2024-07-15 18:52:04.168859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.560 [2024-07-15 18:52:04.168865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.560 [2024-07-15 18:52:04.168879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.560 qpair failed and we were unable to recover it. 
00:26:47.560 [2024-07-15 18:52:04.178732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.560 [2024-07-15 18:52:04.178828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.560 [2024-07-15 18:52:04.178843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.560 [2024-07-15 18:52:04.178849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.560 [2024-07-15 18:52:04.178855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.560 [2024-07-15 18:52:04.178870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.560 qpair failed and we were unable to recover it. 
00:26:47.560 [2024-07-15 18:52:04.188809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.560 [2024-07-15 18:52:04.188869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.560 [2024-07-15 18:52:04.188884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.560 [2024-07-15 18:52:04.188893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.560 [2024-07-15 18:52:04.188899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.560 [2024-07-15 18:52:04.188913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.560 qpair failed and we were unable to recover it. 
00:26:47.560 [2024-07-15 18:52:04.198778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.560 [2024-07-15 18:52:04.198842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.560 [2024-07-15 18:52:04.198857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.560 [2024-07-15 18:52:04.198863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.560 [2024-07-15 18:52:04.198869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.560 [2024-07-15 18:52:04.198883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.560 qpair failed and we were unable to recover it. 
00:26:47.560 [2024-07-15 18:52:04.208880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.560 [2024-07-15 18:52:04.208942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.560 [2024-07-15 18:52:04.208957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.560 [2024-07-15 18:52:04.208963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.560 [2024-07-15 18:52:04.208969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:47.560 [2024-07-15 18:52:04.208983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.560 qpair failed and we were unable to recover it. 
00:26:47.560 [2024-07-15 18:52:04.218916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.560 [2024-07-15 18:52:04.218979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.560 [2024-07-15 18:52:04.218993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.560 [2024-07-15 18:52:04.219000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.560 [2024-07-15 18:52:04.219005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.560 [2024-07-15 18:52:04.219019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.560 qpair failed and we were unable to recover it.
00:26:47.560 [2024-07-15 18:52:04.228925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.560 [2024-07-15 18:52:04.228988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.560 [2024-07-15 18:52:04.229004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.560 [2024-07-15 18:52:04.229011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.560 [2024-07-15 18:52:04.229017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.560 [2024-07-15 18:52:04.229031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.560 qpair failed and we were unable to recover it.
00:26:47.560 [2024-07-15 18:52:04.238979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.560 [2024-07-15 18:52:04.239042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.560 [2024-07-15 18:52:04.239057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.560 [2024-07-15 18:52:04.239063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.560 [2024-07-15 18:52:04.239069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.560 [2024-07-15 18:52:04.239084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.560 qpair failed and we were unable to recover it.
00:26:47.560 [2024-07-15 18:52:04.248983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.560 [2024-07-15 18:52:04.249046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.560 [2024-07-15 18:52:04.249061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.560 [2024-07-15 18:52:04.249067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.560 [2024-07-15 18:52:04.249073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.560 [2024-07-15 18:52:04.249087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.560 qpair failed and we were unable to recover it.
00:26:47.560 [2024-07-15 18:52:04.259040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.560 [2024-07-15 18:52:04.259114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.560 [2024-07-15 18:52:04.259129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.560 [2024-07-15 18:52:04.259135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.560 [2024-07-15 18:52:04.259141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.560 [2024-07-15 18:52:04.259155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.560 qpair failed and we were unable to recover it.
00:26:47.820 [2024-07-15 18:52:04.269073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.820 [2024-07-15 18:52:04.269138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.820 [2024-07-15 18:52:04.269153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.820 [2024-07-15 18:52:04.269160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.820 [2024-07-15 18:52:04.269166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.820 [2024-07-15 18:52:04.269180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.820 qpair failed and we were unable to recover it.
00:26:47.820 [2024-07-15 18:52:04.279087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.820 [2024-07-15 18:52:04.279155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.820 [2024-07-15 18:52:04.279169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.820 [2024-07-15 18:52:04.279179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.820 [2024-07-15 18:52:04.279184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.820 [2024-07-15 18:52:04.279198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.820 qpair failed and we were unable to recover it.
00:26:47.820 [2024-07-15 18:52:04.289129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.820 [2024-07-15 18:52:04.289237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.820 [2024-07-15 18:52:04.289251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.820 [2024-07-15 18:52:04.289258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.820 [2024-07-15 18:52:04.289263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.820 [2024-07-15 18:52:04.289278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.820 qpair failed and we were unable to recover it.
00:26:47.820 [2024-07-15 18:52:04.299168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.820 [2024-07-15 18:52:04.299235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.820 [2024-07-15 18:52:04.299250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.821 [2024-07-15 18:52:04.299256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.821 [2024-07-15 18:52:04.299262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.821 [2024-07-15 18:52:04.299276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.821 qpair failed and we were unable to recover it.
00:26:47.821 [2024-07-15 18:52:04.309200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.821 [2024-07-15 18:52:04.309273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.821 [2024-07-15 18:52:04.309288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.821 [2024-07-15 18:52:04.309294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.821 [2024-07-15 18:52:04.309300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.821 [2024-07-15 18:52:04.309314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.821 qpair failed and we were unable to recover it.
00:26:47.821 [2024-07-15 18:52:04.319231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.821 [2024-07-15 18:52:04.319304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.821 [2024-07-15 18:52:04.319318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.821 [2024-07-15 18:52:04.319325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.821 [2024-07-15 18:52:04.319330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.821 [2024-07-15 18:52:04.319345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.821 qpair failed and we were unable to recover it.
00:26:47.821 [2024-07-15 18:52:04.329312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.821 [2024-07-15 18:52:04.329413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.821 [2024-07-15 18:52:04.329428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.821 [2024-07-15 18:52:04.329434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.821 [2024-07-15 18:52:04.329440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.821 [2024-07-15 18:52:04.329455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.821 qpair failed and we were unable to recover it.
00:26:47.821 [2024-07-15 18:52:04.339265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.821 [2024-07-15 18:52:04.339325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.821 [2024-07-15 18:52:04.339339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.821 [2024-07-15 18:52:04.339345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.821 [2024-07-15 18:52:04.339351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.821 [2024-07-15 18:52:04.339365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.821 qpair failed and we were unable to recover it.
00:26:47.821 [2024-07-15 18:52:04.349309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.821 [2024-07-15 18:52:04.349389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.821 [2024-07-15 18:52:04.349404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.821 [2024-07-15 18:52:04.349410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.821 [2024-07-15 18:52:04.349416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.821 [2024-07-15 18:52:04.349431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.821 qpair failed and we were unable to recover it.
00:26:47.821 [2024-07-15 18:52:04.359314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.821 [2024-07-15 18:52:04.359381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.821 [2024-07-15 18:52:04.359396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.821 [2024-07-15 18:52:04.359402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.821 [2024-07-15 18:52:04.359408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.821 [2024-07-15 18:52:04.359421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.821 qpair failed and we were unable to recover it.
00:26:47.821 [2024-07-15 18:52:04.369337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.821 [2024-07-15 18:52:04.369398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.821 [2024-07-15 18:52:04.369415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.821 [2024-07-15 18:52:04.369422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.821 [2024-07-15 18:52:04.369428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.821 [2024-07-15 18:52:04.369442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.821 qpair failed and we were unable to recover it.
00:26:47.821 [2024-07-15 18:52:04.379375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.821 [2024-07-15 18:52:04.379436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.821 [2024-07-15 18:52:04.379450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.821 [2024-07-15 18:52:04.379457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.821 [2024-07-15 18:52:04.379463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.821 [2024-07-15 18:52:04.379477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.821 qpair failed and we were unable to recover it.
00:26:47.821 [2024-07-15 18:52:04.389340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.821 [2024-07-15 18:52:04.389403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.821 [2024-07-15 18:52:04.389417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.821 [2024-07-15 18:52:04.389424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.821 [2024-07-15 18:52:04.389430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.821 [2024-07-15 18:52:04.389444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.821 qpair failed and we were unable to recover it.
00:26:47.821 [2024-07-15 18:52:04.399445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.821 [2024-07-15 18:52:04.399509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.821 [2024-07-15 18:52:04.399523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.821 [2024-07-15 18:52:04.399530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.821 [2024-07-15 18:52:04.399536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.821 [2024-07-15 18:52:04.399550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.821 qpair failed and we were unable to recover it.
00:26:47.821 [2024-07-15 18:52:04.409465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.821 [2024-07-15 18:52:04.409533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.821 [2024-07-15 18:52:04.409547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.821 [2024-07-15 18:52:04.409554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.821 [2024-07-15 18:52:04.409560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.821 [2024-07-15 18:52:04.409577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.821 qpair failed and we were unable to recover it.
00:26:47.821 [2024-07-15 18:52:04.419536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.821 [2024-07-15 18:52:04.419601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.821 [2024-07-15 18:52:04.419615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.821 [2024-07-15 18:52:04.419621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.821 [2024-07-15 18:52:04.419627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.821 [2024-07-15 18:52:04.419642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.821 qpair failed and we were unable to recover it.
00:26:47.821 [2024-07-15 18:52:04.429541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.821 [2024-07-15 18:52:04.429604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.821 [2024-07-15 18:52:04.429619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.821 [2024-07-15 18:52:04.429625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.821 [2024-07-15 18:52:04.429631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.821 [2024-07-15 18:52:04.429645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.821 qpair failed and we were unable to recover it.
00:26:47.821 [2024-07-15 18:52:04.439549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.821 [2024-07-15 18:52:04.439608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.822 [2024-07-15 18:52:04.439622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.822 [2024-07-15 18:52:04.439629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.822 [2024-07-15 18:52:04.439634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.822 [2024-07-15 18:52:04.439649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.822 qpair failed and we were unable to recover it.
00:26:47.822 [2024-07-15 18:52:04.449608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.822 [2024-07-15 18:52:04.449671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.822 [2024-07-15 18:52:04.449685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.822 [2024-07-15 18:52:04.449692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.822 [2024-07-15 18:52:04.449697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.822 [2024-07-15 18:52:04.449711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.822 qpair failed and we were unable to recover it.
00:26:47.822 [2024-07-15 18:52:04.459610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.822 [2024-07-15 18:52:04.459670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.822 [2024-07-15 18:52:04.459686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.822 [2024-07-15 18:52:04.459693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.822 [2024-07-15 18:52:04.459699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.822 [2024-07-15 18:52:04.459712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.822 qpair failed and we were unable to recover it.
00:26:47.822 [2024-07-15 18:52:04.469619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.822 [2024-07-15 18:52:04.469680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.822 [2024-07-15 18:52:04.469695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.822 [2024-07-15 18:52:04.469701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.822 [2024-07-15 18:52:04.469707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.822 [2024-07-15 18:52:04.469721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.822 qpair failed and we were unable to recover it.
00:26:47.822 [2024-07-15 18:52:04.479656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.822 [2024-07-15 18:52:04.479724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.822 [2024-07-15 18:52:04.479738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.822 [2024-07-15 18:52:04.479745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.822 [2024-07-15 18:52:04.479751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.822 [2024-07-15 18:52:04.479765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.822 qpair failed and we were unable to recover it.
00:26:47.822 [2024-07-15 18:52:04.489745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.822 [2024-07-15 18:52:04.489811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.822 [2024-07-15 18:52:04.489825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.822 [2024-07-15 18:52:04.489831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.822 [2024-07-15 18:52:04.489837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.822 [2024-07-15 18:52:04.489851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.822 qpair failed and we were unable to recover it.
00:26:47.822 [2024-07-15 18:52:04.499740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.822 [2024-07-15 18:52:04.499800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.822 [2024-07-15 18:52:04.499814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.822 [2024-07-15 18:52:04.499820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.822 [2024-07-15 18:52:04.499829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.822 [2024-07-15 18:52:04.499843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.822 qpair failed and we were unable to recover it.
00:26:47.822 [2024-07-15 18:52:04.509717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.822 [2024-07-15 18:52:04.509784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.822 [2024-07-15 18:52:04.509798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.822 [2024-07-15 18:52:04.509805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.822 [2024-07-15 18:52:04.509810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.822 [2024-07-15 18:52:04.509824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.822 qpair failed and we were unable to recover it.
00:26:47.822 [2024-07-15 18:52:04.519777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.822 [2024-07-15 18:52:04.519845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.822 [2024-07-15 18:52:04.519859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.822 [2024-07-15 18:52:04.519865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.822 [2024-07-15 18:52:04.519871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:47.822 [2024-07-15 18:52:04.519886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.822 qpair failed and we were unable to recover it.
00:26:48.083 [2024-07-15 18:52:04.529805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.083 [2024-07-15 18:52:04.529873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.083 [2024-07-15 18:52:04.529888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.083 [2024-07-15 18:52:04.529894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.083 [2024-07-15 18:52:04.529900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:48.083 [2024-07-15 18:52:04.529915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.083 qpair failed and we were unable to recover it.
00:26:48.083 [2024-07-15 18:52:04.539851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.083 [2024-07-15 18:52:04.539917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.083 [2024-07-15 18:52:04.539932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.083 [2024-07-15 18:52:04.539938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.083 [2024-07-15 18:52:04.539944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:48.083 [2024-07-15 18:52:04.539958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.083 qpair failed and we were unable to recover it.
00:26:48.083 [2024-07-15 18:52:04.549810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.083 [2024-07-15 18:52:04.549872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.083 [2024-07-15 18:52:04.549886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.083 [2024-07-15 18:52:04.549893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.083 [2024-07-15 18:52:04.549899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:48.083 [2024-07-15 18:52:04.549913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.083 qpair failed and we were unable to recover it.
00:26:48.083 [2024-07-15 18:52:04.559902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.083 [2024-07-15 18:52:04.559971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.083 [2024-07-15 18:52:04.559985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.083 [2024-07-15 18:52:04.559992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.083 [2024-07-15 18:52:04.559998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:48.083 [2024-07-15 18:52:04.560012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.083 qpair failed and we were unable to recover it.
00:26:48.083 [2024-07-15 18:52:04.569913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.083 [2024-07-15 18:52:04.569975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.083 [2024-07-15 18:52:04.569989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.083 [2024-07-15 18:52:04.569996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.083 [2024-07-15 18:52:04.570002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:48.083 [2024-07-15 18:52:04.570016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:48.083 qpair failed and we were unable to recover it.
00:26:48.083 [2024-07-15 18:52:04.580006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.083 [2024-07-15 18:52:04.580113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.083 [2024-07-15 18:52:04.580129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.083 [2024-07-15 18:52:04.580135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.083 [2024-07-15 18:52:04.580141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.083 [2024-07-15 18:52:04.580156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.083 qpair failed and we were unable to recover it. 
00:26:48.083 [2024-07-15 18:52:04.589994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.083 [2024-07-15 18:52:04.590058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.083 [2024-07-15 18:52:04.590073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.083 [2024-07-15 18:52:04.590079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.083 [2024-07-15 18:52:04.590091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.083 [2024-07-15 18:52:04.590105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.083 qpair failed and we were unable to recover it. 
00:26:48.083 [2024-07-15 18:52:04.600076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.083 [2024-07-15 18:52:04.600156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.083 [2024-07-15 18:52:04.600170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.083 [2024-07-15 18:52:04.600176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.083 [2024-07-15 18:52:04.600182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.083 [2024-07-15 18:52:04.600196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.083 qpair failed and we were unable to recover it. 
00:26:48.083 [2024-07-15 18:52:04.610017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.083 [2024-07-15 18:52:04.610121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.083 [2024-07-15 18:52:04.610135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.083 [2024-07-15 18:52:04.610141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.083 [2024-07-15 18:52:04.610147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.083 [2024-07-15 18:52:04.610161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.083 qpair failed and we were unable to recover it. 
00:26:48.083 [2024-07-15 18:52:04.620011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.083 [2024-07-15 18:52:04.620076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.083 [2024-07-15 18:52:04.620090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.083 [2024-07-15 18:52:04.620097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.083 [2024-07-15 18:52:04.620103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.083 [2024-07-15 18:52:04.620117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.083 qpair failed and we were unable to recover it. 
00:26:48.083 [2024-07-15 18:52:04.630084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.083 [2024-07-15 18:52:04.630149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.083 [2024-07-15 18:52:04.630163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.083 [2024-07-15 18:52:04.630170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.083 [2024-07-15 18:52:04.630176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.083 [2024-07-15 18:52:04.630190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.083 qpair failed and we were unable to recover it. 
00:26:48.083 [2024-07-15 18:52:04.640120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.083 [2024-07-15 18:52:04.640185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.083 [2024-07-15 18:52:04.640199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.083 [2024-07-15 18:52:04.640205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.083 [2024-07-15 18:52:04.640211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.083 [2024-07-15 18:52:04.640231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.083 qpair failed and we were unable to recover it. 
00:26:48.083 [2024-07-15 18:52:04.650223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.083 [2024-07-15 18:52:04.650303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.083 [2024-07-15 18:52:04.650317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.083 [2024-07-15 18:52:04.650324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.083 [2024-07-15 18:52:04.650329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.083 [2024-07-15 18:52:04.650344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.083 qpair failed and we were unable to recover it. 
00:26:48.083 [2024-07-15 18:52:04.660130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.083 [2024-07-15 18:52:04.660194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.083 [2024-07-15 18:52:04.660208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.083 [2024-07-15 18:52:04.660214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.083 [2024-07-15 18:52:04.660220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.083 [2024-07-15 18:52:04.660237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.083 qpair failed and we were unable to recover it. 
00:26:48.083 [2024-07-15 18:52:04.670253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.083 [2024-07-15 18:52:04.670318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.083 [2024-07-15 18:52:04.670333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.083 [2024-07-15 18:52:04.670340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.083 [2024-07-15 18:52:04.670346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.083 [2024-07-15 18:52:04.670361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.083 qpair failed and we were unable to recover it. 
00:26:48.083 [2024-07-15 18:52:04.680247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.083 [2024-07-15 18:52:04.680312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.083 [2024-07-15 18:52:04.680327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.083 [2024-07-15 18:52:04.680337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.083 [2024-07-15 18:52:04.680342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.083 [2024-07-15 18:52:04.680357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.083 qpair failed and we were unable to recover it. 
00:26:48.083 [2024-07-15 18:52:04.690200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.083 [2024-07-15 18:52:04.690307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.083 [2024-07-15 18:52:04.690330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.083 [2024-07-15 18:52:04.690337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.083 [2024-07-15 18:52:04.690343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.083 [2024-07-15 18:52:04.690358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.083 qpair failed and we were unable to recover it. 
00:26:48.083 [2024-07-15 18:52:04.700296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.083 [2024-07-15 18:52:04.700358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.083 [2024-07-15 18:52:04.700372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.083 [2024-07-15 18:52:04.700379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.083 [2024-07-15 18:52:04.700384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.083 [2024-07-15 18:52:04.700398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.083 qpair failed and we were unable to recover it. 
00:26:48.083 [2024-07-15 18:52:04.710275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.084 [2024-07-15 18:52:04.710342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.084 [2024-07-15 18:52:04.710356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.084 [2024-07-15 18:52:04.710363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.084 [2024-07-15 18:52:04.710368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.084 [2024-07-15 18:52:04.710383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.084 qpair failed and we were unable to recover it. 
00:26:48.084 [2024-07-15 18:52:04.720299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.084 [2024-07-15 18:52:04.720369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.084 [2024-07-15 18:52:04.720384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.084 [2024-07-15 18:52:04.720391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.084 [2024-07-15 18:52:04.720396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.084 [2024-07-15 18:52:04.720410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.084 qpair failed and we were unable to recover it. 
00:26:48.084 [2024-07-15 18:52:04.730317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.084 [2024-07-15 18:52:04.730383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.084 [2024-07-15 18:52:04.730397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.084 [2024-07-15 18:52:04.730403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.084 [2024-07-15 18:52:04.730410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.084 [2024-07-15 18:52:04.730424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.084 qpair failed and we were unable to recover it. 
00:26:48.084 [2024-07-15 18:52:04.740359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.084 [2024-07-15 18:52:04.740424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.084 [2024-07-15 18:52:04.740438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.084 [2024-07-15 18:52:04.740444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.084 [2024-07-15 18:52:04.740450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.084 [2024-07-15 18:52:04.740464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.084 qpair failed and we were unable to recover it. 
00:26:48.084 [2024-07-15 18:52:04.750458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.084 [2024-07-15 18:52:04.750522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.084 [2024-07-15 18:52:04.750536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.084 [2024-07-15 18:52:04.750543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.084 [2024-07-15 18:52:04.750549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.084 [2024-07-15 18:52:04.750563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.084 qpair failed and we were unable to recover it. 
00:26:48.084 [2024-07-15 18:52:04.760475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.084 [2024-07-15 18:52:04.760538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.084 [2024-07-15 18:52:04.760552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.084 [2024-07-15 18:52:04.760559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.084 [2024-07-15 18:52:04.760564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.084 [2024-07-15 18:52:04.760578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.084 qpair failed and we were unable to recover it. 
00:26:48.084 [2024-07-15 18:52:04.770422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.084 [2024-07-15 18:52:04.770486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.084 [2024-07-15 18:52:04.770504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.084 [2024-07-15 18:52:04.770510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.084 [2024-07-15 18:52:04.770516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.084 [2024-07-15 18:52:04.770530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.084 qpair failed and we were unable to recover it. 
00:26:48.084 [2024-07-15 18:52:04.780503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.084 [2024-07-15 18:52:04.780568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.084 [2024-07-15 18:52:04.780582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.084 [2024-07-15 18:52:04.780589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.084 [2024-07-15 18:52:04.780594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.084 [2024-07-15 18:52:04.780608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.084 qpair failed and we were unable to recover it. 
00:26:48.345 [2024-07-15 18:52:04.790559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.345 [2024-07-15 18:52:04.790660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.345 [2024-07-15 18:52:04.790674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.345 [2024-07-15 18:52:04.790681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.345 [2024-07-15 18:52:04.790688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.345 [2024-07-15 18:52:04.790702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.345 qpair failed and we were unable to recover it. 
00:26:48.345 [2024-07-15 18:52:04.800571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.345 [2024-07-15 18:52:04.800636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.345 [2024-07-15 18:52:04.800651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.345 [2024-07-15 18:52:04.800658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.345 [2024-07-15 18:52:04.800663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.345 [2024-07-15 18:52:04.800678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.345 qpair failed and we were unable to recover it. 
00:26:48.345 [2024-07-15 18:52:04.810530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.345 [2024-07-15 18:52:04.810595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.345 [2024-07-15 18:52:04.810610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.345 [2024-07-15 18:52:04.810616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.345 [2024-07-15 18:52:04.810622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.345 [2024-07-15 18:52:04.810639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.345 qpair failed and we were unable to recover it. 
00:26:48.345 [2024-07-15 18:52:04.820572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.345 [2024-07-15 18:52:04.820635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.345 [2024-07-15 18:52:04.820649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.345 [2024-07-15 18:52:04.820656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.345 [2024-07-15 18:52:04.820662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.345 [2024-07-15 18:52:04.820676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.345 qpair failed and we were unable to recover it. 
00:26:48.345 [2024-07-15 18:52:04.830663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.345 [2024-07-15 18:52:04.830726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.345 [2024-07-15 18:52:04.830740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.345 [2024-07-15 18:52:04.830748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.345 [2024-07-15 18:52:04.830753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.345 [2024-07-15 18:52:04.830768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.345 qpair failed and we were unable to recover it. 
00:26:48.345 [2024-07-15 18:52:04.840629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.345 [2024-07-15 18:52:04.840696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.345 [2024-07-15 18:52:04.840710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.345 [2024-07-15 18:52:04.840717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.345 [2024-07-15 18:52:04.840723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.345 [2024-07-15 18:52:04.840737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.345 qpair failed and we were unable to recover it. 
00:26:48.345 [2024-07-15 18:52:04.850753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.345 [2024-07-15 18:52:04.850822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.345 [2024-07-15 18:52:04.850835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.345 [2024-07-15 18:52:04.850842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.345 [2024-07-15 18:52:04.850848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.345 [2024-07-15 18:52:04.850862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.345 qpair failed and we were unable to recover it. 
00:26:48.345 [2024-07-15 18:52:04.860717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.345 [2024-07-15 18:52:04.860783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.345 [2024-07-15 18:52:04.860800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.345 [2024-07-15 18:52:04.860807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.345 [2024-07-15 18:52:04.860812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.345 [2024-07-15 18:52:04.860828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.345 qpair failed and we were unable to recover it. 
00:26:48.345 [2024-07-15 18:52:04.870783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.345 [2024-07-15 18:52:04.870849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.345 [2024-07-15 18:52:04.870864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.345 [2024-07-15 18:52:04.870871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.345 [2024-07-15 18:52:04.870877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.345 [2024-07-15 18:52:04.870891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.345 qpair failed and we were unable to recover it. 
00:26:48.345 [2024-07-15 18:52:04.880749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.345 [2024-07-15 18:52:04.880861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.345 [2024-07-15 18:52:04.880877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.345 [2024-07-15 18:52:04.880884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.345 [2024-07-15 18:52:04.880890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.345 [2024-07-15 18:52:04.880905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.345 qpair failed and we were unable to recover it. 
00:26:48.345 [2024-07-15 18:52:04.890778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.345 [2024-07-15 18:52:04.890843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.345 [2024-07-15 18:52:04.890857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.345 [2024-07-15 18:52:04.890864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.345 [2024-07-15 18:52:04.890869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.345 [2024-07-15 18:52:04.890883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.345 qpair failed and we were unable to recover it. 
00:26:48.345 [2024-07-15 18:52:04.900883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.345 [2024-07-15 18:52:04.900940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.345 [2024-07-15 18:52:04.900955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.346 [2024-07-15 18:52:04.900961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.346 [2024-07-15 18:52:04.900967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.346 [2024-07-15 18:52:04.900984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.346 qpair failed and we were unable to recover it. 
00:26:48.346 [2024-07-15 18:52:04.910847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.346 [2024-07-15 18:52:04.910906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.346 [2024-07-15 18:52:04.910920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.346 [2024-07-15 18:52:04.910927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.346 [2024-07-15 18:52:04.910932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.346 [2024-07-15 18:52:04.910947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.346 qpair failed and we were unable to recover it. 
00:26:48.346 [2024-07-15 18:52:04.920926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.346 [2024-07-15 18:52:04.920991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.346 [2024-07-15 18:52:04.921005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.346 [2024-07-15 18:52:04.921011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.346 [2024-07-15 18:52:04.921017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.346 [2024-07-15 18:52:04.921031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.346 qpair failed and we were unable to recover it. 
00:26:48.346 [2024-07-15 18:52:04.930962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.346 [2024-07-15 18:52:04.931024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.346 [2024-07-15 18:52:04.931038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.346 [2024-07-15 18:52:04.931044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.346 [2024-07-15 18:52:04.931050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.346 [2024-07-15 18:52:04.931065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.346 qpair failed and we were unable to recover it. 
00:26:48.346 [2024-07-15 18:52:04.941021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.346 [2024-07-15 18:52:04.941081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.346 [2024-07-15 18:52:04.941096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.346 [2024-07-15 18:52:04.941102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.346 [2024-07-15 18:52:04.941108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.346 [2024-07-15 18:52:04.941121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.346 qpair failed and we were unable to recover it. 
00:26:48.346 [2024-07-15 18:52:04.950955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.346 [2024-07-15 18:52:04.951020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.346 [2024-07-15 18:52:04.951035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.346 [2024-07-15 18:52:04.951041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.346 [2024-07-15 18:52:04.951047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.346 [2024-07-15 18:52:04.951061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.346 qpair failed and we were unable to recover it. 
00:26:48.346 [2024-07-15 18:52:04.961040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.346 [2024-07-15 18:52:04.961107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.346 [2024-07-15 18:52:04.961122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.346 [2024-07-15 18:52:04.961128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.346 [2024-07-15 18:52:04.961134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.346 [2024-07-15 18:52:04.961148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.346 qpair failed and we were unable to recover it. 
00:26:48.346 [2024-07-15 18:52:04.971059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.346 [2024-07-15 18:52:04.971126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.346 [2024-07-15 18:52:04.971141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.346 [2024-07-15 18:52:04.971147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.346 [2024-07-15 18:52:04.971153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.346 [2024-07-15 18:52:04.971167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.346 qpair failed and we were unable to recover it. 
00:26:48.346 [2024-07-15 18:52:04.981140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.346 [2024-07-15 18:52:04.981205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.346 [2024-07-15 18:52:04.981219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.346 [2024-07-15 18:52:04.981230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.346 [2024-07-15 18:52:04.981236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.346 [2024-07-15 18:52:04.981250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.346 qpair failed and we were unable to recover it. 
00:26:48.346 [2024-07-15 18:52:04.991199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.346 [2024-07-15 18:52:04.991266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.346 [2024-07-15 18:52:04.991280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.346 [2024-07-15 18:52:04.991286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.346 [2024-07-15 18:52:04.991295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.346 [2024-07-15 18:52:04.991310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.346 qpair failed and we were unable to recover it. 
00:26:48.346 [2024-07-15 18:52:05.001158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.346 [2024-07-15 18:52:05.001231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.346 [2024-07-15 18:52:05.001246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.346 [2024-07-15 18:52:05.001253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.346 [2024-07-15 18:52:05.001259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.346 [2024-07-15 18:52:05.001273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.346 qpair failed and we were unable to recover it. 
00:26:48.346 [2024-07-15 18:52:05.011217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.346 [2024-07-15 18:52:05.011290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.346 [2024-07-15 18:52:05.011304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.346 [2024-07-15 18:52:05.011311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.346 [2024-07-15 18:52:05.011316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.346 [2024-07-15 18:52:05.011331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.346 qpair failed and we were unable to recover it. 
00:26:48.346 [2024-07-15 18:52:05.021200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.346 [2024-07-15 18:52:05.021264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.346 [2024-07-15 18:52:05.021279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.346 [2024-07-15 18:52:05.021285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.346 [2024-07-15 18:52:05.021291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.346 [2024-07-15 18:52:05.021305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.346 qpair failed and we were unable to recover it. 
00:26:48.346 [2024-07-15 18:52:05.031275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.346 [2024-07-15 18:52:05.031341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.346 [2024-07-15 18:52:05.031356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.346 [2024-07-15 18:52:05.031362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.346 [2024-07-15 18:52:05.031368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.346 [2024-07-15 18:52:05.031383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.346 qpair failed and we were unable to recover it. 
00:26:48.346 [2024-07-15 18:52:05.041278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.346 [2024-07-15 18:52:05.041343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.346 [2024-07-15 18:52:05.041358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.346 [2024-07-15 18:52:05.041364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.346 [2024-07-15 18:52:05.041370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.346 [2024-07-15 18:52:05.041384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.346 qpair failed and we were unable to recover it. 
00:26:48.607 [2024-07-15 18:52:05.051352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.607 [2024-07-15 18:52:05.051421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.607 [2024-07-15 18:52:05.051436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.607 [2024-07-15 18:52:05.051443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.607 [2024-07-15 18:52:05.051449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.607 [2024-07-15 18:52:05.051464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.607 qpair failed and we were unable to recover it. 
00:26:48.607 [2024-07-15 18:52:05.061337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.607 [2024-07-15 18:52:05.061398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.607 [2024-07-15 18:52:05.061412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.607 [2024-07-15 18:52:05.061419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.607 [2024-07-15 18:52:05.061424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.607 [2024-07-15 18:52:05.061439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.607 qpair failed and we were unable to recover it. 
00:26:48.607 [2024-07-15 18:52:05.071356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.607 [2024-07-15 18:52:05.071422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.607 [2024-07-15 18:52:05.071437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.607 [2024-07-15 18:52:05.071443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.607 [2024-07-15 18:52:05.071449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.607 [2024-07-15 18:52:05.071463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.607 qpair failed and we were unable to recover it. 
00:26:48.607 [2024-07-15 18:52:05.081396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.607 [2024-07-15 18:52:05.081459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.607 [2024-07-15 18:52:05.081473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.607 [2024-07-15 18:52:05.081483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.607 [2024-07-15 18:52:05.081490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.607 [2024-07-15 18:52:05.081506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.607 qpair failed and we were unable to recover it. 
00:26:48.607 [2024-07-15 18:52:05.091400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.607 [2024-07-15 18:52:05.091462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.607 [2024-07-15 18:52:05.091477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.607 [2024-07-15 18:52:05.091483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.607 [2024-07-15 18:52:05.091489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.607 [2024-07-15 18:52:05.091504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.607 qpair failed and we were unable to recover it. 
00:26:48.607 [2024-07-15 18:52:05.101460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.607 [2024-07-15 18:52:05.101517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.607 [2024-07-15 18:52:05.101532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.607 [2024-07-15 18:52:05.101538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.607 [2024-07-15 18:52:05.101544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.607 [2024-07-15 18:52:05.101559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.607 qpair failed and we were unable to recover it. 
00:26:48.607 [2024-07-15 18:52:05.111475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.608 [2024-07-15 18:52:05.111537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.608 [2024-07-15 18:52:05.111551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.608 [2024-07-15 18:52:05.111557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.608 [2024-07-15 18:52:05.111563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.608 [2024-07-15 18:52:05.111577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.608 qpair failed and we were unable to recover it. 
00:26:48.608 [2024-07-15 18:52:05.121505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.608 [2024-07-15 18:52:05.121573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.608 [2024-07-15 18:52:05.121588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.608 [2024-07-15 18:52:05.121594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.608 [2024-07-15 18:52:05.121600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.608 [2024-07-15 18:52:05.121614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.608 qpair failed and we were unable to recover it. 
00:26:48.608 [2024-07-15 18:52:05.131572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.608 [2024-07-15 18:52:05.131683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.608 [2024-07-15 18:52:05.131698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.608 [2024-07-15 18:52:05.131704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.608 [2024-07-15 18:52:05.131711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.608 [2024-07-15 18:52:05.131725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.608 qpair failed and we were unable to recover it. 
00:26:48.608 [2024-07-15 18:52:05.141569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.608 [2024-07-15 18:52:05.141633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.608 [2024-07-15 18:52:05.141647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.608 [2024-07-15 18:52:05.141653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.608 [2024-07-15 18:52:05.141659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.608 [2024-07-15 18:52:05.141674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.608 qpair failed and we were unable to recover it. 
00:26:48.608 [2024-07-15 18:52:05.151623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.608 [2024-07-15 18:52:05.151689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.608 [2024-07-15 18:52:05.151703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.608 [2024-07-15 18:52:05.151710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.608 [2024-07-15 18:52:05.151716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.608 [2024-07-15 18:52:05.151730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.608 qpair failed and we were unable to recover it. 
00:26:48.608 [2024-07-15 18:52:05.161621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.608 [2024-07-15 18:52:05.161684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.608 [2024-07-15 18:52:05.161698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.608 [2024-07-15 18:52:05.161704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.608 [2024-07-15 18:52:05.161710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.608 [2024-07-15 18:52:05.161725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.608 qpair failed and we were unable to recover it. 
00:26:48.608 [2024-07-15 18:52:05.171652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.608 [2024-07-15 18:52:05.171724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.608 [2024-07-15 18:52:05.171741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.608 [2024-07-15 18:52:05.171748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.608 [2024-07-15 18:52:05.171754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.608 [2024-07-15 18:52:05.171768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.608 qpair failed and we were unable to recover it. 
00:26:48.608 [2024-07-15 18:52:05.181704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.608 [2024-07-15 18:52:05.181792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.608 [2024-07-15 18:52:05.181806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.608 [2024-07-15 18:52:05.181813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.608 [2024-07-15 18:52:05.181819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.608 [2024-07-15 18:52:05.181833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.608 qpair failed and we were unable to recover it. 
00:26:48.608 [2024-07-15 18:52:05.191718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.608 [2024-07-15 18:52:05.191821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.608 [2024-07-15 18:52:05.191835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.608 [2024-07-15 18:52:05.191842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.608 [2024-07-15 18:52:05.191848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.608 [2024-07-15 18:52:05.191864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.608 qpair failed and we were unable to recover it. 
00:26:48.608 [2024-07-15 18:52:05.201733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.608 [2024-07-15 18:52:05.201798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.608 [2024-07-15 18:52:05.201813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.608 [2024-07-15 18:52:05.201819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.608 [2024-07-15 18:52:05.201825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.608 [2024-07-15 18:52:05.201839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.608 qpair failed and we were unable to recover it. 
00:26:48.608 [2024-07-15 18:52:05.211773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.608 [2024-07-15 18:52:05.211837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.608 [2024-07-15 18:52:05.211851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.608 [2024-07-15 18:52:05.211857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.608 [2024-07-15 18:52:05.211863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.608 [2024-07-15 18:52:05.211880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.608 qpair failed and we were unable to recover it. 
00:26:48.608 [2024-07-15 18:52:05.221838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.608 [2024-07-15 18:52:05.221894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.608 [2024-07-15 18:52:05.221909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.608 [2024-07-15 18:52:05.221915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.608 [2024-07-15 18:52:05.221921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.608 [2024-07-15 18:52:05.221935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.608 qpair failed and we were unable to recover it. 
00:26:48.608 [2024-07-15 18:52:05.231822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.608 [2024-07-15 18:52:05.231884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.608 [2024-07-15 18:52:05.231899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.608 [2024-07-15 18:52:05.231905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.608 [2024-07-15 18:52:05.231912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.608 [2024-07-15 18:52:05.231926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.608 qpair failed and we were unable to recover it. 
00:26:48.608 [2024-07-15 18:52:05.241859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.608 [2024-07-15 18:52:05.241924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.608 [2024-07-15 18:52:05.241938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.608 [2024-07-15 18:52:05.241944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.608 [2024-07-15 18:52:05.241950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.608 [2024-07-15 18:52:05.241964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.608 qpair failed and we were unable to recover it. 
00:26:48.608 [2024-07-15 18:52:05.251904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.608 [2024-07-15 18:52:05.251973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.608 [2024-07-15 18:52:05.251987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.608 [2024-07-15 18:52:05.251993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.608 [2024-07-15 18:52:05.251999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.608 [2024-07-15 18:52:05.252014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.608 qpair failed and we were unable to recover it. 
00:26:48.608 [2024-07-15 18:52:05.261885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.608 [2024-07-15 18:52:05.261949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.608 [2024-07-15 18:52:05.261967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.608 [2024-07-15 18:52:05.261973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.608 [2024-07-15 18:52:05.261979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.608 [2024-07-15 18:52:05.261993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.608 qpair failed and we were unable to recover it. 
00:26:48.608 [2024-07-15 18:52:05.271935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.608 [2024-07-15 18:52:05.272000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.608 [2024-07-15 18:52:05.272015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.608 [2024-07-15 18:52:05.272021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.608 [2024-07-15 18:52:05.272027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.608 [2024-07-15 18:52:05.272041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.608 qpair failed and we were unable to recover it. 
00:26:48.608 [2024-07-15 18:52:05.281968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.608 [2024-07-15 18:52:05.282031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.608 [2024-07-15 18:52:05.282045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.608 [2024-07-15 18:52:05.282051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.608 [2024-07-15 18:52:05.282057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.608 [2024-07-15 18:52:05.282071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.608 qpair failed and we were unable to recover it. 
00:26:48.608 [2024-07-15 18:52:05.292032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.608 [2024-07-15 18:52:05.292099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.608 [2024-07-15 18:52:05.292113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.608 [2024-07-15 18:52:05.292120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.608 [2024-07-15 18:52:05.292125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.608 [2024-07-15 18:52:05.292139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.608 qpair failed and we were unable to recover it. 
00:26:48.608 [2024-07-15 18:52:05.302018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.609 [2024-07-15 18:52:05.302082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.609 [2024-07-15 18:52:05.302096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.609 [2024-07-15 18:52:05.302102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.609 [2024-07-15 18:52:05.302108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.609 [2024-07-15 18:52:05.302128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.609 qpair failed and we were unable to recover it. 
00:26:48.609 [2024-07-15 18:52:05.312064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.609 [2024-07-15 18:52:05.312129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.609 [2024-07-15 18:52:05.312143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.609 [2024-07-15 18:52:05.312150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.609 [2024-07-15 18:52:05.312156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.609 [2024-07-15 18:52:05.312170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.609 qpair failed and we were unable to recover it. 
00:26:48.869 [2024-07-15 18:52:05.322080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.869 [2024-07-15 18:52:05.322143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.870 [2024-07-15 18:52:05.322158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.870 [2024-07-15 18:52:05.322164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.870 [2024-07-15 18:52:05.322170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.870 [2024-07-15 18:52:05.322185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.870 qpair failed and we were unable to recover it. 
00:26:48.870 [2024-07-15 18:52:05.332120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.870 [2024-07-15 18:52:05.332206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.870 [2024-07-15 18:52:05.332220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.870 [2024-07-15 18:52:05.332230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.870 [2024-07-15 18:52:05.332236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.870 [2024-07-15 18:52:05.332250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.870 qpair failed and we were unable to recover it. 
00:26:48.870 [2024-07-15 18:52:05.342159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.870 [2024-07-15 18:52:05.342220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.870 [2024-07-15 18:52:05.342237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.870 [2024-07-15 18:52:05.342244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.870 [2024-07-15 18:52:05.342249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.870 [2024-07-15 18:52:05.342263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.870 qpair failed and we were unable to recover it. 
00:26:48.870 [2024-07-15 18:52:05.352182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.870 [2024-07-15 18:52:05.352246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.870 [2024-07-15 18:52:05.352265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.870 [2024-07-15 18:52:05.352272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.870 [2024-07-15 18:52:05.352277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.870 [2024-07-15 18:52:05.352292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.870 qpair failed and we were unable to recover it. 
00:26:48.870 [2024-07-15 18:52:05.362195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.870 [2024-07-15 18:52:05.362264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.870 [2024-07-15 18:52:05.362278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.870 [2024-07-15 18:52:05.362284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.870 [2024-07-15 18:52:05.362290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.870 [2024-07-15 18:52:05.362304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.870 qpair failed and we were unable to recover it. 
00:26:48.870 [2024-07-15 18:52:05.372577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.870 [2024-07-15 18:52:05.372682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.870 [2024-07-15 18:52:05.372696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.870 [2024-07-15 18:52:05.372703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.870 [2024-07-15 18:52:05.372709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.870 [2024-07-15 18:52:05.372723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.870 qpair failed and we were unable to recover it. 
00:26:48.870 [2024-07-15 18:52:05.382289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.870 [2024-07-15 18:52:05.382392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.870 [2024-07-15 18:52:05.382414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.870 [2024-07-15 18:52:05.382420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.870 [2024-07-15 18:52:05.382426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.870 [2024-07-15 18:52:05.382440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.870 qpair failed and we were unable to recover it. 
00:26:48.870 [2024-07-15 18:52:05.392330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.870 [2024-07-15 18:52:05.392405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.870 [2024-07-15 18:52:05.392419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.870 [2024-07-15 18:52:05.392426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.870 [2024-07-15 18:52:05.392435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.870 [2024-07-15 18:52:05.392449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.870 qpair failed and we were unable to recover it. 
00:26:48.870 [2024-07-15 18:52:05.402352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.870 [2024-07-15 18:52:05.402421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.870 [2024-07-15 18:52:05.402435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.870 [2024-07-15 18:52:05.402442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.870 [2024-07-15 18:52:05.402448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.870 [2024-07-15 18:52:05.402462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.870 qpair failed and we were unable to recover it. 
00:26:48.870 [2024-07-15 18:52:05.412328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.870 [2024-07-15 18:52:05.412407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.870 [2024-07-15 18:52:05.412421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.870 [2024-07-15 18:52:05.412428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.870 [2024-07-15 18:52:05.412433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.870 [2024-07-15 18:52:05.412447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.870 qpair failed and we were unable to recover it. 
00:26:48.870 [2024-07-15 18:52:05.422387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.870 [2024-07-15 18:52:05.422449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.870 [2024-07-15 18:52:05.422463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.870 [2024-07-15 18:52:05.422469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.870 [2024-07-15 18:52:05.422475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.870 [2024-07-15 18:52:05.422490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.870 qpair failed and we were unable to recover it. 
00:26:48.870 [2024-07-15 18:52:05.432399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.870 [2024-07-15 18:52:05.432463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.870 [2024-07-15 18:52:05.432478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.870 [2024-07-15 18:52:05.432484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.870 [2024-07-15 18:52:05.432490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.870 [2024-07-15 18:52:05.432504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.870 qpair failed and we were unable to recover it. 
00:26:48.870 [2024-07-15 18:52:05.442427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.870 [2024-07-15 18:52:05.442496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.870 [2024-07-15 18:52:05.442510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.870 [2024-07-15 18:52:05.442517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.870 [2024-07-15 18:52:05.442523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.870 [2024-07-15 18:52:05.442537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.870 qpair failed and we were unable to recover it. 
00:26:48.870 [2024-07-15 18:52:05.452488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.870 [2024-07-15 18:52:05.452558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.870 [2024-07-15 18:52:05.452572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.870 [2024-07-15 18:52:05.452579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.870 [2024-07-15 18:52:05.452585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.870 [2024-07-15 18:52:05.452600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.870 qpair failed and we were unable to recover it. 
00:26:48.870 [2024-07-15 18:52:05.462504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.871 [2024-07-15 18:52:05.462570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.871 [2024-07-15 18:52:05.462585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.871 [2024-07-15 18:52:05.462592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.871 [2024-07-15 18:52:05.462598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.871 [2024-07-15 18:52:05.462612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.871 qpair failed and we were unable to recover it. 
00:26:48.871 [2024-07-15 18:52:05.472527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.871 [2024-07-15 18:52:05.472638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.871 [2024-07-15 18:52:05.472653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.871 [2024-07-15 18:52:05.472660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.871 [2024-07-15 18:52:05.472665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.871 [2024-07-15 18:52:05.472681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.871 qpair failed and we were unable to recover it. 
00:26:48.871 [2024-07-15 18:52:05.482556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.871 [2024-07-15 18:52:05.482619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.871 [2024-07-15 18:52:05.482633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.871 [2024-07-15 18:52:05.482642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.871 [2024-07-15 18:52:05.482648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.871 [2024-07-15 18:52:05.482663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.871 qpair failed and we were unable to recover it. 
00:26:48.871 [2024-07-15 18:52:05.492561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.871 [2024-07-15 18:52:05.492627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.871 [2024-07-15 18:52:05.492641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.871 [2024-07-15 18:52:05.492648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.871 [2024-07-15 18:52:05.492654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.871 [2024-07-15 18:52:05.492668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.871 qpair failed and we were unable to recover it. 
00:26:48.871 [2024-07-15 18:52:05.502603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.871 [2024-07-15 18:52:05.502706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.871 [2024-07-15 18:52:05.502720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.871 [2024-07-15 18:52:05.502727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.871 [2024-07-15 18:52:05.502732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.871 [2024-07-15 18:52:05.502747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.871 qpair failed and we were unable to recover it. 
00:26:48.871 [2024-07-15 18:52:05.512657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.871 [2024-07-15 18:52:05.512718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.871 [2024-07-15 18:52:05.512732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.871 [2024-07-15 18:52:05.512738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.871 [2024-07-15 18:52:05.512744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.871 [2024-07-15 18:52:05.512758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.871 qpair failed and we were unable to recover it. 
00:26:48.871 [2024-07-15 18:52:05.522664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.871 [2024-07-15 18:52:05.522730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.871 [2024-07-15 18:52:05.522745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.871 [2024-07-15 18:52:05.522751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.871 [2024-07-15 18:52:05.522757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.871 [2024-07-15 18:52:05.522771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.871 qpair failed and we were unable to recover it. 
00:26:48.871 [2024-07-15 18:52:05.532662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.871 [2024-07-15 18:52:05.532724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.871 [2024-07-15 18:52:05.532738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.871 [2024-07-15 18:52:05.532744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.871 [2024-07-15 18:52:05.532750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.871 [2024-07-15 18:52:05.532764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.871 qpair failed and we were unable to recover it. 
00:26:48.871 [2024-07-15 18:52:05.542742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.871 [2024-07-15 18:52:05.542819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.871 [2024-07-15 18:52:05.542834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.871 [2024-07-15 18:52:05.542840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.871 [2024-07-15 18:52:05.542846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.871 [2024-07-15 18:52:05.542860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.871 qpair failed and we were unable to recover it. 
00:26:48.871 [2024-07-15 18:52:05.552761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.871 [2024-07-15 18:52:05.552826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.871 [2024-07-15 18:52:05.552840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.871 [2024-07-15 18:52:05.552846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.871 [2024-07-15 18:52:05.552852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.871 [2024-07-15 18:52:05.552866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.871 qpair failed and we were unable to recover it. 
00:26:48.871 [2024-07-15 18:52:05.562778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.871 [2024-07-15 18:52:05.562844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.871 [2024-07-15 18:52:05.562858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.871 [2024-07-15 18:52:05.562865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.871 [2024-07-15 18:52:05.562871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.871 [2024-07-15 18:52:05.562884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.871 qpair failed and we were unable to recover it. 
00:26:48.871 [2024-07-15 18:52:05.572818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.871 [2024-07-15 18:52:05.572877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.871 [2024-07-15 18:52:05.572891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.871 [2024-07-15 18:52:05.572900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.871 [2024-07-15 18:52:05.572906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:48.871 [2024-07-15 18:52:05.572920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.871 qpair failed and we were unable to recover it. 
00:26:49.132 [2024-07-15 18:52:05.582824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.132 [2024-07-15 18:52:05.582885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.132 [2024-07-15 18:52:05.582899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.132 [2024-07-15 18:52:05.582905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.132 [2024-07-15 18:52:05.582911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.132 [2024-07-15 18:52:05.582925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.132 qpair failed and we were unable to recover it. 
00:26:49.132 [2024-07-15 18:52:05.592879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.132 [2024-07-15 18:52:05.592940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.132 [2024-07-15 18:52:05.592954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.132 [2024-07-15 18:52:05.592961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.132 [2024-07-15 18:52:05.592966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.132 [2024-07-15 18:52:05.592981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.132 qpair failed and we were unable to recover it. 
00:26:49.132 [2024-07-15 18:52:05.602836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.132 [2024-07-15 18:52:05.602898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.132 [2024-07-15 18:52:05.602912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.132 [2024-07-15 18:52:05.602919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.132 [2024-07-15 18:52:05.602925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.132 [2024-07-15 18:52:05.602939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.132 qpair failed and we were unable to recover it. 
00:26:49.132 [2024-07-15 18:52:05.612932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.132 [2024-07-15 18:52:05.612997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.132 [2024-07-15 18:52:05.613011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.132 [2024-07-15 18:52:05.613017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.132 [2024-07-15 18:52:05.613023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.132 [2024-07-15 18:52:05.613037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.132 qpair failed and we were unable to recover it. 
00:26:49.132 [2024-07-15 18:52:05.622956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.132 [2024-07-15 18:52:05.623019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.132 [2024-07-15 18:52:05.623033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.132 [2024-07-15 18:52:05.623040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.132 [2024-07-15 18:52:05.623045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.132 [2024-07-15 18:52:05.623060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.132 qpair failed and we were unable to recover it. 
00:26:49.132 [2024-07-15 18:52:05.632969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.132 [2024-07-15 18:52:05.633030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.132 [2024-07-15 18:52:05.633044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.132 [2024-07-15 18:52:05.633051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.132 [2024-07-15 18:52:05.633056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.132 [2024-07-15 18:52:05.633070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.132 qpair failed and we were unable to recover it. 
00:26:49.133 [2024-07-15 18:52:05.643004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.133 [2024-07-15 18:52:05.643067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.133 [2024-07-15 18:52:05.643082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.133 [2024-07-15 18:52:05.643088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.133 [2024-07-15 18:52:05.643094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.133 [2024-07-15 18:52:05.643107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.133 qpair failed and we were unable to recover it. 
00:26:49.133 [2024-07-15 18:52:05.653099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.133 [2024-07-15 18:52:05.653172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.133 [2024-07-15 18:52:05.653186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.133 [2024-07-15 18:52:05.653192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.133 [2024-07-15 18:52:05.653198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.133 [2024-07-15 18:52:05.653212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.133 qpair failed and we were unable to recover it. 
00:26:49.133 [2024-07-15 18:52:05.663084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.133 [2024-07-15 18:52:05.663146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.133 [2024-07-15 18:52:05.663163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.133 [2024-07-15 18:52:05.663170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.133 [2024-07-15 18:52:05.663175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.133 [2024-07-15 18:52:05.663190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.133 qpair failed and we were unable to recover it. 
00:26:49.133 [2024-07-15 18:52:05.673114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.133 [2024-07-15 18:52:05.673180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.133 [2024-07-15 18:52:05.673195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.133 [2024-07-15 18:52:05.673201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.133 [2024-07-15 18:52:05.673207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.133 [2024-07-15 18:52:05.673222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.133 qpair failed and we were unable to recover it. 
00:26:49.133 [2024-07-15 18:52:05.683128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.133 [2024-07-15 18:52:05.683192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.133 [2024-07-15 18:52:05.683206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.133 [2024-07-15 18:52:05.683213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.133 [2024-07-15 18:52:05.683218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.133 [2024-07-15 18:52:05.683241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.133 qpair failed and we were unable to recover it. 
00:26:49.133 [2024-07-15 18:52:05.693163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.133 [2024-07-15 18:52:05.693232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.133 [2024-07-15 18:52:05.693247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.133 [2024-07-15 18:52:05.693253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.133 [2024-07-15 18:52:05.693259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.133 [2024-07-15 18:52:05.693273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.133 qpair failed and we were unable to recover it. 
00:26:49.133 [2024-07-15 18:52:05.703182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.133 [2024-07-15 18:52:05.703251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.133 [2024-07-15 18:52:05.703267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.133 [2024-07-15 18:52:05.703274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.133 [2024-07-15 18:52:05.703280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.133 [2024-07-15 18:52:05.703298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.133 qpair failed and we were unable to recover it. 
00:26:49.133 [2024-07-15 18:52:05.713211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.133 [2024-07-15 18:52:05.713294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.133 [2024-07-15 18:52:05.713309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.133 [2024-07-15 18:52:05.713316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.133 [2024-07-15 18:52:05.713322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.133 [2024-07-15 18:52:05.713337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.133 qpair failed and we were unable to recover it. 
00:26:49.133 [2024-07-15 18:52:05.723241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.133 [2024-07-15 18:52:05.723305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.133 [2024-07-15 18:52:05.723320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.133 [2024-07-15 18:52:05.723326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.133 [2024-07-15 18:52:05.723332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.133 [2024-07-15 18:52:05.723346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.133 qpair failed and we were unable to recover it. 
00:26:49.133 [2024-07-15 18:52:05.733255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.133 [2024-07-15 18:52:05.733324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.133 [2024-07-15 18:52:05.733339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.133 [2024-07-15 18:52:05.733345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.133 [2024-07-15 18:52:05.733351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.133 [2024-07-15 18:52:05.733365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.133 qpair failed and we were unable to recover it. 
00:26:49.133 [2024-07-15 18:52:05.743305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.133 [2024-07-15 18:52:05.743379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.133 [2024-07-15 18:52:05.743394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.133 [2024-07-15 18:52:05.743400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.133 [2024-07-15 18:52:05.743406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.133 [2024-07-15 18:52:05.743420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.133 qpair failed and we were unable to recover it. 
00:26:49.133 [2024-07-15 18:52:05.753328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.133 [2024-07-15 18:52:05.753392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.133 [2024-07-15 18:52:05.753409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.133 [2024-07-15 18:52:05.753415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.133 [2024-07-15 18:52:05.753421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.133 [2024-07-15 18:52:05.753436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.133 qpair failed and we were unable to recover it. 
00:26:49.133 [2024-07-15 18:52:05.763351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.133 [2024-07-15 18:52:05.763416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.133 [2024-07-15 18:52:05.763430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.133 [2024-07-15 18:52:05.763436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.133 [2024-07-15 18:52:05.763442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.133 [2024-07-15 18:52:05.763457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.133 qpair failed and we were unable to recover it. 
00:26:49.133 [2024-07-15 18:52:05.773365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.133 [2024-07-15 18:52:05.773431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.133 [2024-07-15 18:52:05.773445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.133 [2024-07-15 18:52:05.773451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.133 [2024-07-15 18:52:05.773457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.133 [2024-07-15 18:52:05.773471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.133 qpair failed and we were unable to recover it. 
00:26:49.133 [2024-07-15 18:52:05.783406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.133 [2024-07-15 18:52:05.783468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.134 [2024-07-15 18:52:05.783482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.134 [2024-07-15 18:52:05.783489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.134 [2024-07-15 18:52:05.783494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.134 [2024-07-15 18:52:05.783508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.134 qpair failed and we were unable to recover it. 
00:26:49.134 [2024-07-15 18:52:05.793442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.134 [2024-07-15 18:52:05.793507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.134 [2024-07-15 18:52:05.793521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.134 [2024-07-15 18:52:05.793528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.134 [2024-07-15 18:52:05.793536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.134 [2024-07-15 18:52:05.793551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.134 qpair failed and we were unable to recover it. 
00:26:49.134 [2024-07-15 18:52:05.803461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.134 [2024-07-15 18:52:05.803533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.134 [2024-07-15 18:52:05.803548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.134 [2024-07-15 18:52:05.803555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.134 [2024-07-15 18:52:05.803561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.134 [2024-07-15 18:52:05.803576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.134 qpair failed and we were unable to recover it. 
00:26:49.134 [2024-07-15 18:52:05.813506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.134 [2024-07-15 18:52:05.813566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.134 [2024-07-15 18:52:05.813581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.134 [2024-07-15 18:52:05.813587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.134 [2024-07-15 18:52:05.813593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.134 [2024-07-15 18:52:05.813608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.134 qpair failed and we were unable to recover it. 
00:26:49.134 [2024-07-15 18:52:05.823520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.134 [2024-07-15 18:52:05.823584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.134 [2024-07-15 18:52:05.823598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.134 [2024-07-15 18:52:05.823604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.134 [2024-07-15 18:52:05.823610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.134 [2024-07-15 18:52:05.823624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.134 qpair failed and we were unable to recover it.
00:26:49.134 [2024-07-15 18:52:05.833547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.134 [2024-07-15 18:52:05.833607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.134 [2024-07-15 18:52:05.833621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.134 [2024-07-15 18:52:05.833628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.134 [2024-07-15 18:52:05.833633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.134 [2024-07-15 18:52:05.833648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.134 qpair failed and we were unable to recover it.
00:26:49.394 [2024-07-15 18:52:05.843572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.394 [2024-07-15 18:52:05.843640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.394 [2024-07-15 18:52:05.843655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.394 [2024-07-15 18:52:05.843661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.394 [2024-07-15 18:52:05.843667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.394 [2024-07-15 18:52:05.843680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.394 qpair failed and we were unable to recover it.
00:26:49.394 [2024-07-15 18:52:05.853627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.394 [2024-07-15 18:52:05.853704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.394 [2024-07-15 18:52:05.853718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.394 [2024-07-15 18:52:05.853724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.394 [2024-07-15 18:52:05.853730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.394 [2024-07-15 18:52:05.853744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.394 qpair failed and we were unable to recover it.
00:26:49.394 [2024-07-15 18:52:05.863621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.394 [2024-07-15 18:52:05.863684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.394 [2024-07-15 18:52:05.863698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.394 [2024-07-15 18:52:05.863705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.394 [2024-07-15 18:52:05.863711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.394 [2024-07-15 18:52:05.863725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.394 qpair failed and we were unable to recover it.
00:26:49.394 [2024-07-15 18:52:05.873674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.394 [2024-07-15 18:52:05.873735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.394 [2024-07-15 18:52:05.873749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.394 [2024-07-15 18:52:05.873756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.394 [2024-07-15 18:52:05.873761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.394 [2024-07-15 18:52:05.873775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.394 qpair failed and we were unable to recover it.
00:26:49.394 [2024-07-15 18:52:05.883687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.394 [2024-07-15 18:52:05.883751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.394 [2024-07-15 18:52:05.883766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.394 [2024-07-15 18:52:05.883776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.394 [2024-07-15 18:52:05.883782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.394 [2024-07-15 18:52:05.883795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.394 qpair failed and we were unable to recover it.
00:26:49.394 [2024-07-15 18:52:05.893703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.394 [2024-07-15 18:52:05.893771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.394 [2024-07-15 18:52:05.893785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.394 [2024-07-15 18:52:05.893792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.394 [2024-07-15 18:52:05.893797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.394 [2024-07-15 18:52:05.893812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.394 qpair failed and we were unable to recover it.
00:26:49.394 [2024-07-15 18:52:05.903758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.394 [2024-07-15 18:52:05.903817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.394 [2024-07-15 18:52:05.903831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.394 [2024-07-15 18:52:05.903838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.394 [2024-07-15 18:52:05.903843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.394 [2024-07-15 18:52:05.903858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.394 qpair failed and we were unable to recover it.
00:26:49.394 [2024-07-15 18:52:05.913800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.394 [2024-07-15 18:52:05.913876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.394 [2024-07-15 18:52:05.913891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.394 [2024-07-15 18:52:05.913898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.394 [2024-07-15 18:52:05.913903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.395 [2024-07-15 18:52:05.913917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.395 qpair failed and we were unable to recover it.
00:26:49.395 [2024-07-15 18:52:05.923811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.395 [2024-07-15 18:52:05.923898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.395 [2024-07-15 18:52:05.923912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.395 [2024-07-15 18:52:05.923918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.395 [2024-07-15 18:52:05.923923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.395 [2024-07-15 18:52:05.923938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.395 qpair failed and we were unable to recover it.
00:26:49.395 [2024-07-15 18:52:05.933857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.395 [2024-07-15 18:52:05.933939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.395 [2024-07-15 18:52:05.933954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.395 [2024-07-15 18:52:05.933961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.395 [2024-07-15 18:52:05.933966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.395 [2024-07-15 18:52:05.933980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.395 qpair failed and we were unable to recover it.
00:26:49.395 [2024-07-15 18:52:05.943841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.395 [2024-07-15 18:52:05.943902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.395 [2024-07-15 18:52:05.943917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.395 [2024-07-15 18:52:05.943923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.395 [2024-07-15 18:52:05.943929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.395 [2024-07-15 18:52:05.943943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.395 qpair failed and we were unable to recover it.
00:26:49.395 [2024-07-15 18:52:05.953920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.395 [2024-07-15 18:52:05.953982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.395 [2024-07-15 18:52:05.953996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.395 [2024-07-15 18:52:05.954003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.395 [2024-07-15 18:52:05.954008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.395 [2024-07-15 18:52:05.954023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.395 qpair failed and we were unable to recover it.
00:26:49.395 [2024-07-15 18:52:05.963929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.395 [2024-07-15 18:52:05.963995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.395 [2024-07-15 18:52:05.964009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.395 [2024-07-15 18:52:05.964016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.395 [2024-07-15 18:52:05.964022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.395 [2024-07-15 18:52:05.964036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.395 qpair failed and we were unable to recover it.
00:26:49.395 [2024-07-15 18:52:05.973976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.395 [2024-07-15 18:52:05.974083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.395 [2024-07-15 18:52:05.974097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.395 [2024-07-15 18:52:05.974107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.395 [2024-07-15 18:52:05.974112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.395 [2024-07-15 18:52:05.974128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.395 qpair failed and we were unable to recover it.
00:26:49.395 [2024-07-15 18:52:05.983998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.395 [2024-07-15 18:52:05.984063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.395 [2024-07-15 18:52:05.984077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.395 [2024-07-15 18:52:05.984084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.395 [2024-07-15 18:52:05.984090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.395 [2024-07-15 18:52:05.984104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.395 qpair failed and we were unable to recover it.
00:26:49.395 [2024-07-15 18:52:05.994004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.395 [2024-07-15 18:52:05.994066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.395 [2024-07-15 18:52:05.994080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.395 [2024-07-15 18:52:05.994087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.395 [2024-07-15 18:52:05.994093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.395 [2024-07-15 18:52:05.994107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.395 qpair failed and we were unable to recover it.
00:26:49.395 [2024-07-15 18:52:06.004052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.395 [2024-07-15 18:52:06.004121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.395 [2024-07-15 18:52:06.004136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.395 [2024-07-15 18:52:06.004143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.395 [2024-07-15 18:52:06.004149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.395 [2024-07-15 18:52:06.004163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.395 qpair failed and we were unable to recover it.
00:26:49.395 [2024-07-15 18:52:06.014090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.395 [2024-07-15 18:52:06.014153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.395 [2024-07-15 18:52:06.014168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.395 [2024-07-15 18:52:06.014175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.395 [2024-07-15 18:52:06.014181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.395 [2024-07-15 18:52:06.014195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.395 qpair failed and we were unable to recover it.
00:26:49.395 [2024-07-15 18:52:06.024116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.395 [2024-07-15 18:52:06.024179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.395 [2024-07-15 18:52:06.024194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.395 [2024-07-15 18:52:06.024200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.395 [2024-07-15 18:52:06.024206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.395 [2024-07-15 18:52:06.024220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.395 qpair failed and we were unable to recover it.
00:26:49.395 [2024-07-15 18:52:06.034111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.395 [2024-07-15 18:52:06.034181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.395 [2024-07-15 18:52:06.034195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.395 [2024-07-15 18:52:06.034202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.395 [2024-07-15 18:52:06.034208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.395 [2024-07-15 18:52:06.034221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.395 qpair failed and we were unable to recover it.
00:26:49.395 [2024-07-15 18:52:06.044158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.395 [2024-07-15 18:52:06.044229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.395 [2024-07-15 18:52:06.044244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.395 [2024-07-15 18:52:06.044250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.395 [2024-07-15 18:52:06.044256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.395 [2024-07-15 18:52:06.044271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.395 qpair failed and we were unable to recover it.
00:26:49.395 [2024-07-15 18:52:06.054194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.395 [2024-07-15 18:52:06.054257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.395 [2024-07-15 18:52:06.054271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.395 [2024-07-15 18:52:06.054278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.395 [2024-07-15 18:52:06.054284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.395 [2024-07-15 18:52:06.054298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.395 qpair failed and we were unable to recover it.
00:26:49.396 [2024-07-15 18:52:06.064236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.396 [2024-07-15 18:52:06.064294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.396 [2024-07-15 18:52:06.064311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.396 [2024-07-15 18:52:06.064318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.396 [2024-07-15 18:52:06.064324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.396 [2024-07-15 18:52:06.064338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.396 qpair failed and we were unable to recover it.
00:26:49.396 [2024-07-15 18:52:06.074192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.396 [2024-07-15 18:52:06.074254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.396 [2024-07-15 18:52:06.074269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.396 [2024-07-15 18:52:06.074277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.396 [2024-07-15 18:52:06.074283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.396 [2024-07-15 18:52:06.074297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.396 qpair failed and we were unable to recover it.
00:26:49.396 [2024-07-15 18:52:06.084263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.396 [2024-07-15 18:52:06.084332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.396 [2024-07-15 18:52:06.084348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.396 [2024-07-15 18:52:06.084355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.396 [2024-07-15 18:52:06.084361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.396 [2024-07-15 18:52:06.084377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.396 qpair failed and we were unable to recover it.
00:26:49.396 [2024-07-15 18:52:06.094245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.396 [2024-07-15 18:52:06.094310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.396 [2024-07-15 18:52:06.094325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.396 [2024-07-15 18:52:06.094331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.396 [2024-07-15 18:52:06.094337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.396 [2024-07-15 18:52:06.094354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.396 qpair failed and we were unable to recover it.
00:26:49.656 [2024-07-15 18:52:06.104280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.656 [2024-07-15 18:52:06.104391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.656 [2024-07-15 18:52:06.104407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.656 [2024-07-15 18:52:06.104414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.656 [2024-07-15 18:52:06.104420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.656 [2024-07-15 18:52:06.104438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.656 qpair failed and we were unable to recover it.
00:26:49.656 [2024-07-15 18:52:06.114328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.656 [2024-07-15 18:52:06.114444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.656 [2024-07-15 18:52:06.114459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.656 [2024-07-15 18:52:06.114465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.656 [2024-07-15 18:52:06.114471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.656 [2024-07-15 18:52:06.114486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.656 qpair failed and we were unable to recover it.
00:26:49.656 [2024-07-15 18:52:06.124402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.656 [2024-07-15 18:52:06.124469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.656 [2024-07-15 18:52:06.124485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.656 [2024-07-15 18:52:06.124491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.656 [2024-07-15 18:52:06.124497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.656 [2024-07-15 18:52:06.124511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.656 qpair failed and we were unable to recover it.
00:26:49.656 [2024-07-15 18:52:06.134362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.656 [2024-07-15 18:52:06.134428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.656 [2024-07-15 18:52:06.134442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.656 [2024-07-15 18:52:06.134448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.657 [2024-07-15 18:52:06.134454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.657 [2024-07-15 18:52:06.134468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.657 qpair failed and we were unable to recover it.
00:26:49.657 [2024-07-15 18:52:06.144380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.657 [2024-07-15 18:52:06.144442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.657 [2024-07-15 18:52:06.144457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.657 [2024-07-15 18:52:06.144463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.657 [2024-07-15 18:52:06.144469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.657 [2024-07-15 18:52:06.144483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.657 qpair failed and we were unable to recover it.
00:26:49.657 [2024-07-15 18:52:06.154493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.657 [2024-07-15 18:52:06.154562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.657 [2024-07-15 18:52:06.154579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.657 [2024-07-15 18:52:06.154586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.657 [2024-07-15 18:52:06.154591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.657 [2024-07-15 18:52:06.154605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.657 qpair failed and we were unable to recover it.
00:26:49.657 [2024-07-15 18:52:06.164450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.657 [2024-07-15 18:52:06.164512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.657 [2024-07-15 18:52:06.164526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.657 [2024-07-15 18:52:06.164533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.657 [2024-07-15 18:52:06.164538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.657 [2024-07-15 18:52:06.164552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.657 qpair failed and we were unable to recover it.
00:26:49.657 [2024-07-15 18:52:06.174480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.657 [2024-07-15 18:52:06.174546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.657 [2024-07-15 18:52:06.174560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.657 [2024-07-15 18:52:06.174566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.657 [2024-07-15 18:52:06.174572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.657 [2024-07-15 18:52:06.174587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.657 qpair failed and we were unable to recover it.
00:26:49.657 [2024-07-15 18:52:06.184563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.657 [2024-07-15 18:52:06.184632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.657 [2024-07-15 18:52:06.184646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.657 [2024-07-15 18:52:06.184652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.657 [2024-07-15 18:52:06.184658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.657 [2024-07-15 18:52:06.184672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.657 qpair failed and we were unable to recover it. 
00:26:49.657 [2024-07-15 18:52:06.194648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.657 [2024-07-15 18:52:06.194721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.657 [2024-07-15 18:52:06.194735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.657 [2024-07-15 18:52:06.194741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.657 [2024-07-15 18:52:06.194751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.657 [2024-07-15 18:52:06.194765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.657 qpair failed and we were unable to recover it. 
00:26:49.657 [2024-07-15 18:52:06.204591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.657 [2024-07-15 18:52:06.204656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.657 [2024-07-15 18:52:06.204671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.657 [2024-07-15 18:52:06.204677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.657 [2024-07-15 18:52:06.204683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.657 [2024-07-15 18:52:06.204696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.657 qpair failed and we were unable to recover it. 
00:26:49.657 [2024-07-15 18:52:06.214643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.657 [2024-07-15 18:52:06.214719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.657 [2024-07-15 18:52:06.214733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.657 [2024-07-15 18:52:06.214739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.657 [2024-07-15 18:52:06.214745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.657 [2024-07-15 18:52:06.214759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.657 qpair failed and we were unable to recover it. 
00:26:49.657 [2024-07-15 18:52:06.224698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.657 [2024-07-15 18:52:06.224804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.657 [2024-07-15 18:52:06.224818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.657 [2024-07-15 18:52:06.224825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.657 [2024-07-15 18:52:06.224831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.657 [2024-07-15 18:52:06.224845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.657 qpair failed and we were unable to recover it. 
00:26:49.657 [2024-07-15 18:52:06.234692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.657 [2024-07-15 18:52:06.234757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.657 [2024-07-15 18:52:06.234772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.657 [2024-07-15 18:52:06.234779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.657 [2024-07-15 18:52:06.234785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.657 [2024-07-15 18:52:06.234799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.657 qpair failed and we were unable to recover it. 
00:26:49.657 [2024-07-15 18:52:06.244768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.657 [2024-07-15 18:52:06.244837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.657 [2024-07-15 18:52:06.244851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.657 [2024-07-15 18:52:06.244858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.657 [2024-07-15 18:52:06.244864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.657 [2024-07-15 18:52:06.244878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.657 qpair failed and we were unable to recover it. 
00:26:49.657 [2024-07-15 18:52:06.254707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.657 [2024-07-15 18:52:06.254772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.657 [2024-07-15 18:52:06.254786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.657 [2024-07-15 18:52:06.254793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.657 [2024-07-15 18:52:06.254799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.657 [2024-07-15 18:52:06.254813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.657 qpair failed and we were unable to recover it. 
00:26:49.657 [2024-07-15 18:52:06.264798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.657 [2024-07-15 18:52:06.264859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.657 [2024-07-15 18:52:06.264874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.657 [2024-07-15 18:52:06.264880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.657 [2024-07-15 18:52:06.264886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.657 [2024-07-15 18:52:06.264900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.657 qpair failed and we were unable to recover it. 
00:26:49.657 [2024-07-15 18:52:06.274827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.657 [2024-07-15 18:52:06.274888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.657 [2024-07-15 18:52:06.274903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.657 [2024-07-15 18:52:06.274909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.657 [2024-07-15 18:52:06.274915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.658 [2024-07-15 18:52:06.274929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.658 qpair failed and we were unable to recover it.
00:26:49.658 [2024-07-15 18:52:06.284864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.658 [2024-07-15 18:52:06.284929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.658 [2024-07-15 18:52:06.284943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.658 [2024-07-15 18:52:06.284950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.658 [2024-07-15 18:52:06.284959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.658 [2024-07-15 18:52:06.284973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.658 qpair failed and we were unable to recover it.
00:26:49.658 [2024-07-15 18:52:06.294823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.658 [2024-07-15 18:52:06.294892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.658 [2024-07-15 18:52:06.294907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.658 [2024-07-15 18:52:06.294913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.658 [2024-07-15 18:52:06.294919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.658 [2024-07-15 18:52:06.294933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.658 qpair failed and we were unable to recover it.
00:26:49.658 [2024-07-15 18:52:06.304976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.658 [2024-07-15 18:52:06.305035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.658 [2024-07-15 18:52:06.305051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.658 [2024-07-15 18:52:06.305057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.658 [2024-07-15 18:52:06.305063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.658 [2024-07-15 18:52:06.305077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.658 qpair failed and we were unable to recover it.
00:26:49.658 [2024-07-15 18:52:06.314941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.658 [2024-07-15 18:52:06.315003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.658 [2024-07-15 18:52:06.315018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.658 [2024-07-15 18:52:06.315024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.658 [2024-07-15 18:52:06.315030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.658 [2024-07-15 18:52:06.315044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.658 qpair failed and we were unable to recover it.
00:26:49.658 [2024-07-15 18:52:06.324927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.658 [2024-07-15 18:52:06.324990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.658 [2024-07-15 18:52:06.325005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.658 [2024-07-15 18:52:06.325011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.658 [2024-07-15 18:52:06.325017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.658 [2024-07-15 18:52:06.325032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.658 qpair failed and we were unable to recover it.
00:26:49.658 [2024-07-15 18:52:06.334989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.658 [2024-07-15 18:52:06.335061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.658 [2024-07-15 18:52:06.335075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.658 [2024-07-15 18:52:06.335082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.658 [2024-07-15 18:52:06.335087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.658 [2024-07-15 18:52:06.335102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.658 qpair failed and we were unable to recover it.
00:26:49.658 [2024-07-15 18:52:06.345010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.658 [2024-07-15 18:52:06.345077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.658 [2024-07-15 18:52:06.345092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.658 [2024-07-15 18:52:06.345098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.658 [2024-07-15 18:52:06.345104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.658 [2024-07-15 18:52:06.345119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.658 qpair failed and we were unable to recover it.
00:26:49.658 [2024-07-15 18:52:06.355124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.658 [2024-07-15 18:52:06.355189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.658 [2024-07-15 18:52:06.355204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.658 [2024-07-15 18:52:06.355210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.658 [2024-07-15 18:52:06.355216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.658 [2024-07-15 18:52:06.355236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.658 qpair failed and we were unable to recover it.
00:26:49.919 [2024-07-15 18:52:06.365084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.919 [2024-07-15 18:52:06.365148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.919 [2024-07-15 18:52:06.365162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.919 [2024-07-15 18:52:06.365169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.919 [2024-07-15 18:52:06.365175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.919 [2024-07-15 18:52:06.365189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.919 qpair failed and we were unable to recover it.
00:26:49.919 [2024-07-15 18:52:06.375108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.919 [2024-07-15 18:52:06.375175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.919 [2024-07-15 18:52:06.375189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.919 [2024-07-15 18:52:06.375199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.919 [2024-07-15 18:52:06.375204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.919 [2024-07-15 18:52:06.375219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.919 qpair failed and we were unable to recover it.
00:26:49.919 [2024-07-15 18:52:06.385173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.919 [2024-07-15 18:52:06.385237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.919 [2024-07-15 18:52:06.385251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.919 [2024-07-15 18:52:06.385258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.919 [2024-07-15 18:52:06.385264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.919 [2024-07-15 18:52:06.385278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.919 qpair failed and we were unable to recover it.
00:26:49.919 [2024-07-15 18:52:06.395118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.919 [2024-07-15 18:52:06.395183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.919 [2024-07-15 18:52:06.395198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.919 [2024-07-15 18:52:06.395204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.919 [2024-07-15 18:52:06.395210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.919 [2024-07-15 18:52:06.395229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.919 qpair failed and we were unable to recover it.
00:26:49.919 [2024-07-15 18:52:06.405260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.919 [2024-07-15 18:52:06.405327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.919 [2024-07-15 18:52:06.405341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.919 [2024-07-15 18:52:06.405348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.919 [2024-07-15 18:52:06.405354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.919 [2024-07-15 18:52:06.405368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.919 qpair failed and we were unable to recover it.
00:26:49.919 [2024-07-15 18:52:06.415190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.919 [2024-07-15 18:52:06.415256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.919 [2024-07-15 18:52:06.415270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.919 [2024-07-15 18:52:06.415276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.919 [2024-07-15 18:52:06.415282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.919 [2024-07-15 18:52:06.415297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.919 qpair failed and we were unable to recover it.
00:26:49.919 [2024-07-15 18:52:06.425261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.919 [2024-07-15 18:52:06.425349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.919 [2024-07-15 18:52:06.425363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.919 [2024-07-15 18:52:06.425370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.919 [2024-07-15 18:52:06.425376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.919 [2024-07-15 18:52:06.425390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.919 qpair failed and we were unable to recover it.
00:26:49.919 [2024-07-15 18:52:06.435254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.919 [2024-07-15 18:52:06.435318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.919 [2024-07-15 18:52:06.435332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.919 [2024-07-15 18:52:06.435339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.919 [2024-07-15 18:52:06.435345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.919 [2024-07-15 18:52:06.435359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.919 qpair failed and we were unable to recover it.
00:26:49.919 [2024-07-15 18:52:06.445333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.919 [2024-07-15 18:52:06.445398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.919 [2024-07-15 18:52:06.445412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.919 [2024-07-15 18:52:06.445418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.919 [2024-07-15 18:52:06.445424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.919 [2024-07-15 18:52:06.445438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.919 qpair failed and we were unable to recover it.
00:26:49.919 [2024-07-15 18:52:06.455306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.919 [2024-07-15 18:52:06.455390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.919 [2024-07-15 18:52:06.455404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.919 [2024-07-15 18:52:06.455411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.920 [2024-07-15 18:52:06.455416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.920 [2024-07-15 18:52:06.455432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.920 qpair failed and we were unable to recover it.
00:26:49.920 [2024-07-15 18:52:06.465332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.920 [2024-07-15 18:52:06.465398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.920 [2024-07-15 18:52:06.465415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.920 [2024-07-15 18:52:06.465422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.920 [2024-07-15 18:52:06.465428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.920 [2024-07-15 18:52:06.465442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.920 qpair failed and we were unable to recover it.
00:26:49.920 [2024-07-15 18:52:06.475434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.920 [2024-07-15 18:52:06.475493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.920 [2024-07-15 18:52:06.475507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.920 [2024-07-15 18:52:06.475514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.920 [2024-07-15 18:52:06.475520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.920 [2024-07-15 18:52:06.475534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.920 qpair failed and we were unable to recover it.
00:26:49.920 [2024-07-15 18:52:06.485411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.920 [2024-07-15 18:52:06.485494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.920 [2024-07-15 18:52:06.485508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.920 [2024-07-15 18:52:06.485514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.920 [2024-07-15 18:52:06.485520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.920 [2024-07-15 18:52:06.485533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.920 qpair failed and we were unable to recover it.
00:26:49.920 [2024-07-15 18:52:06.495529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.920 [2024-07-15 18:52:06.495593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.920 [2024-07-15 18:52:06.495607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.920 [2024-07-15 18:52:06.495613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.920 [2024-07-15 18:52:06.495619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:49.920 [2024-07-15 18:52:06.495633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:49.920 qpair failed and we were unable to recover it.
00:26:49.920 [2024-07-15 18:52:06.505613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.920 [2024-07-15 18:52:06.505676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.920 [2024-07-15 18:52:06.505691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.920 [2024-07-15 18:52:06.505697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.920 [2024-07-15 18:52:06.505703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.920 [2024-07-15 18:52:06.505721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.920 qpair failed and we were unable to recover it. 
00:26:49.920 [2024-07-15 18:52:06.515585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.920 [2024-07-15 18:52:06.515646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.920 [2024-07-15 18:52:06.515661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.920 [2024-07-15 18:52:06.515667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.920 [2024-07-15 18:52:06.515673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.920 [2024-07-15 18:52:06.515688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.920 qpair failed and we were unable to recover it. 
00:26:49.920 [2024-07-15 18:52:06.525583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.920 [2024-07-15 18:52:06.525665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.920 [2024-07-15 18:52:06.525679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.920 [2024-07-15 18:52:06.525686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.920 [2024-07-15 18:52:06.525692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.920 [2024-07-15 18:52:06.525706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.920 qpair failed and we were unable to recover it. 
00:26:49.920 [2024-07-15 18:52:06.535533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.920 [2024-07-15 18:52:06.535608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.920 [2024-07-15 18:52:06.535623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.920 [2024-07-15 18:52:06.535630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.920 [2024-07-15 18:52:06.535635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.920 [2024-07-15 18:52:06.535650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.920 qpair failed and we were unable to recover it. 
00:26:49.920 [2024-07-15 18:52:06.545685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.920 [2024-07-15 18:52:06.545750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.920 [2024-07-15 18:52:06.545765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.920 [2024-07-15 18:52:06.545772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.920 [2024-07-15 18:52:06.545778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.920 [2024-07-15 18:52:06.545793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.920 qpair failed and we were unable to recover it. 
00:26:49.920 [2024-07-15 18:52:06.555634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.920 [2024-07-15 18:52:06.555696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.920 [2024-07-15 18:52:06.555714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.920 [2024-07-15 18:52:06.555720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.920 [2024-07-15 18:52:06.555726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.920 [2024-07-15 18:52:06.555741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.920 qpair failed and we were unable to recover it. 
00:26:49.920 [2024-07-15 18:52:06.565680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.920 [2024-07-15 18:52:06.565742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.920 [2024-07-15 18:52:06.565756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.920 [2024-07-15 18:52:06.565763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.920 [2024-07-15 18:52:06.565769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.921 [2024-07-15 18:52:06.565783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.921 qpair failed and we were unable to recover it. 
00:26:49.921 [2024-07-15 18:52:06.575728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.921 [2024-07-15 18:52:06.575797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.921 [2024-07-15 18:52:06.575811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.921 [2024-07-15 18:52:06.575818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.921 [2024-07-15 18:52:06.575823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.921 [2024-07-15 18:52:06.575838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.921 qpair failed and we were unable to recover it. 
00:26:49.921 [2024-07-15 18:52:06.585765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.921 [2024-07-15 18:52:06.585872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.921 [2024-07-15 18:52:06.585891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.921 [2024-07-15 18:52:06.585898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.921 [2024-07-15 18:52:06.585903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.921 [2024-07-15 18:52:06.585918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.921 qpair failed and we were unable to recover it. 
00:26:49.921 [2024-07-15 18:52:06.595769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.921 [2024-07-15 18:52:06.595838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.921 [2024-07-15 18:52:06.595852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.921 [2024-07-15 18:52:06.595858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.921 [2024-07-15 18:52:06.595864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.921 [2024-07-15 18:52:06.595881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.921 qpair failed and we were unable to recover it. 
00:26:49.921 [2024-07-15 18:52:06.605857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.921 [2024-07-15 18:52:06.605931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.921 [2024-07-15 18:52:06.605945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.921 [2024-07-15 18:52:06.605952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.921 [2024-07-15 18:52:06.605957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.921 [2024-07-15 18:52:06.605972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.921 qpair failed and we were unable to recover it. 
00:26:49.921 [2024-07-15 18:52:06.615832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.921 [2024-07-15 18:52:06.615894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.921 [2024-07-15 18:52:06.615908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.921 [2024-07-15 18:52:06.615914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.921 [2024-07-15 18:52:06.615920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:49.921 [2024-07-15 18:52:06.615935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.921 qpair failed and we were unable to recover it. 
00:26:50.180 [2024-07-15 18:52:06.625796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.180 [2024-07-15 18:52:06.625859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.180 [2024-07-15 18:52:06.625874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.180 [2024-07-15 18:52:06.625881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.180 [2024-07-15 18:52:06.625887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.180 [2024-07-15 18:52:06.625902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.180 qpair failed and we were unable to recover it. 
00:26:50.180 [2024-07-15 18:52:06.635827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.180 [2024-07-15 18:52:06.635904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.180 [2024-07-15 18:52:06.635918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.180 [2024-07-15 18:52:06.635925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.180 [2024-07-15 18:52:06.635931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.180 [2024-07-15 18:52:06.635945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.180 qpair failed and we were unable to recover it. 
00:26:50.180 [2024-07-15 18:52:06.645924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.180 [2024-07-15 18:52:06.645989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.180 [2024-07-15 18:52:06.646003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.180 [2024-07-15 18:52:06.646010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.180 [2024-07-15 18:52:06.646016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.180 [2024-07-15 18:52:06.646029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.180 qpair failed and we were unable to recover it. 
00:26:50.180 [2024-07-15 18:52:06.655997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.181 [2024-07-15 18:52:06.656103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.181 [2024-07-15 18:52:06.656144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.181 [2024-07-15 18:52:06.656151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.181 [2024-07-15 18:52:06.656157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.181 [2024-07-15 18:52:06.656171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.181 qpair failed and we were unable to recover it. 
00:26:50.181 [2024-07-15 18:52:06.666034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.181 [2024-07-15 18:52:06.666147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.181 [2024-07-15 18:52:06.666162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.181 [2024-07-15 18:52:06.666168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.181 [2024-07-15 18:52:06.666174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.181 [2024-07-15 18:52:06.666189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.181 qpair failed and we were unable to recover it. 
00:26:50.181 [2024-07-15 18:52:06.676022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.181 [2024-07-15 18:52:06.676127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.181 [2024-07-15 18:52:06.676142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.181 [2024-07-15 18:52:06.676149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.181 [2024-07-15 18:52:06.676155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.181 [2024-07-15 18:52:06.676171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.181 qpair failed and we were unable to recover it. 
00:26:50.181 [2024-07-15 18:52:06.686046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.181 [2024-07-15 18:52:06.686108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.181 [2024-07-15 18:52:06.686122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.181 [2024-07-15 18:52:06.686128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.181 [2024-07-15 18:52:06.686137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.181 [2024-07-15 18:52:06.686151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.181 qpair failed and we were unable to recover it. 
00:26:50.181 [2024-07-15 18:52:06.696085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.181 [2024-07-15 18:52:06.696159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.181 [2024-07-15 18:52:06.696173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.181 [2024-07-15 18:52:06.696180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.181 [2024-07-15 18:52:06.696185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.181 [2024-07-15 18:52:06.696199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.181 qpair failed and we were unable to recover it. 
00:26:50.181 [2024-07-15 18:52:06.706061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.181 [2024-07-15 18:52:06.706124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.181 [2024-07-15 18:52:06.706138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.181 [2024-07-15 18:52:06.706145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.181 [2024-07-15 18:52:06.706151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.181 [2024-07-15 18:52:06.706165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.181 qpair failed and we were unable to recover it. 
00:26:50.181 [2024-07-15 18:52:06.716179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.181 [2024-07-15 18:52:06.716244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.181 [2024-07-15 18:52:06.716259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.181 [2024-07-15 18:52:06.716265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.181 [2024-07-15 18:52:06.716271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.181 [2024-07-15 18:52:06.716286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.181 qpair failed and we were unable to recover it. 
00:26:50.181 [2024-07-15 18:52:06.726159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.181 [2024-07-15 18:52:06.726231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.181 [2024-07-15 18:52:06.726246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.181 [2024-07-15 18:52:06.726252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.181 [2024-07-15 18:52:06.726258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.181 [2024-07-15 18:52:06.726272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.181 qpair failed and we were unable to recover it. 
00:26:50.181 [2024-07-15 18:52:06.736190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.181 [2024-07-15 18:52:06.736266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.181 [2024-07-15 18:52:06.736280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.181 [2024-07-15 18:52:06.736287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.181 [2024-07-15 18:52:06.736293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.181 [2024-07-15 18:52:06.736307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.181 qpair failed and we were unable to recover it. 
00:26:50.181 [2024-07-15 18:52:06.746188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.181 [2024-07-15 18:52:06.746256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.181 [2024-07-15 18:52:06.746271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.181 [2024-07-15 18:52:06.746277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.181 [2024-07-15 18:52:06.746283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.181 [2024-07-15 18:52:06.746297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.181 qpair failed and we were unable to recover it. 
00:26:50.181 [2024-07-15 18:52:06.756233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.181 [2024-07-15 18:52:06.756298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.181 [2024-07-15 18:52:06.756313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.182 [2024-07-15 18:52:06.756319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.182 [2024-07-15 18:52:06.756324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.182 [2024-07-15 18:52:06.756338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.182 qpair failed and we were unable to recover it. 
00:26:50.182 [2024-07-15 18:52:06.766306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.182 [2024-07-15 18:52:06.766388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.182 [2024-07-15 18:52:06.766402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.182 [2024-07-15 18:52:06.766409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.182 [2024-07-15 18:52:06.766414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.182 [2024-07-15 18:52:06.766428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.182 qpair failed and we were unable to recover it. 
00:26:50.182 [2024-07-15 18:52:06.776292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.182 [2024-07-15 18:52:06.776358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.182 [2024-07-15 18:52:06.776372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.182 [2024-07-15 18:52:06.776385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.182 [2024-07-15 18:52:06.776391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.182 [2024-07-15 18:52:06.776405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.182 qpair failed and we were unable to recover it. 
00:26:50.182 [2024-07-15 18:52:06.786309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.182 [2024-07-15 18:52:06.786366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.182 [2024-07-15 18:52:06.786381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.182 [2024-07-15 18:52:06.786387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.182 [2024-07-15 18:52:06.786392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.182 [2024-07-15 18:52:06.786407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.182 qpair failed and we were unable to recover it. 
00:26:50.182 [2024-07-15 18:52:06.796340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.182 [2024-07-15 18:52:06.796402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.182 [2024-07-15 18:52:06.796417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.182 [2024-07-15 18:52:06.796423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.182 [2024-07-15 18:52:06.796429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.182 [2024-07-15 18:52:06.796443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.182 qpair failed and we were unable to recover it. 
00:26:50.182 [2024-07-15 18:52:06.806319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.182 [2024-07-15 18:52:06.806381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.182 [2024-07-15 18:52:06.806396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.182 [2024-07-15 18:52:06.806403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.182 [2024-07-15 18:52:06.806408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.182 [2024-07-15 18:52:06.806423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.182 qpair failed and we were unable to recover it. 
00:26:50.182 [2024-07-15 18:52:06.816425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.182 [2024-07-15 18:52:06.816486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.182 [2024-07-15 18:52:06.816501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.182 [2024-07-15 18:52:06.816507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.182 [2024-07-15 18:52:06.816513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.182 [2024-07-15 18:52:06.816527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.182 qpair failed and we were unable to recover it. 
00:26:50.182 [2024-07-15 18:52:06.826488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.182 [2024-07-15 18:52:06.826547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.182 [2024-07-15 18:52:06.826562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.182 [2024-07-15 18:52:06.826569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.182 [2024-07-15 18:52:06.826575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.182 [2024-07-15 18:52:06.826589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.182 qpair failed and we were unable to recover it. 
00:26:50.182 [2024-07-15 18:52:06.836488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.182 [2024-07-15 18:52:06.836566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.182 [2024-07-15 18:52:06.836581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.182 [2024-07-15 18:52:06.836588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.182 [2024-07-15 18:52:06.836593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.182 [2024-07-15 18:52:06.836608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.182 qpair failed and we were unable to recover it. 
00:26:50.182 [2024-07-15 18:52:06.846501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.182 [2024-07-15 18:52:06.846566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.182 [2024-07-15 18:52:06.846580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.182 [2024-07-15 18:52:06.846586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.182 [2024-07-15 18:52:06.846592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.182 [2024-07-15 18:52:06.846606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.182 qpair failed and we were unable to recover it. 
00:26:50.182 [2024-07-15 18:52:06.856510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.182 [2024-07-15 18:52:06.856575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.183 [2024-07-15 18:52:06.856589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.183 [2024-07-15 18:52:06.856595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.183 [2024-07-15 18:52:06.856601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.183 [2024-07-15 18:52:06.856615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.183 qpair failed and we were unable to recover it. 
00:26:50.183 [2024-07-15 18:52:06.866578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.183 [2024-07-15 18:52:06.866640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.183 [2024-07-15 18:52:06.866658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.183 [2024-07-15 18:52:06.866665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.183 [2024-07-15 18:52:06.866670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.183 [2024-07-15 18:52:06.866685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.183 qpair failed and we were unable to recover it. 
00:26:50.183 [2024-07-15 18:52:06.876592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.183 [2024-07-15 18:52:06.876674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.183 [2024-07-15 18:52:06.876688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.183 [2024-07-15 18:52:06.876694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.183 [2024-07-15 18:52:06.876700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.183 [2024-07-15 18:52:06.876714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.183 qpair failed and we were unable to recover it. 
00:26:50.442 [2024-07-15 18:52:06.886609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.442 [2024-07-15 18:52:06.886674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.442 [2024-07-15 18:52:06.886689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.442 [2024-07-15 18:52:06.886696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.442 [2024-07-15 18:52:06.886702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.442 [2024-07-15 18:52:06.886716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.442 qpair failed and we were unable to recover it. 
00:26:50.442 [2024-07-15 18:52:06.896631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.442 [2024-07-15 18:52:06.896719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.442 [2024-07-15 18:52:06.896733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.442 [2024-07-15 18:52:06.896740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.442 [2024-07-15 18:52:06.896746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.442 [2024-07-15 18:52:06.896759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.442 qpair failed and we were unable to recover it. 
00:26:50.442 [2024-07-15 18:52:06.906681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.442 [2024-07-15 18:52:06.906739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.442 [2024-07-15 18:52:06.906754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.442 [2024-07-15 18:52:06.906760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.442 [2024-07-15 18:52:06.906765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.442 [2024-07-15 18:52:06.906783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.442 qpair failed and we were unable to recover it. 
00:26:50.442 [2024-07-15 18:52:06.916745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.442 [2024-07-15 18:52:06.916856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.442 [2024-07-15 18:52:06.916872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.442 [2024-07-15 18:52:06.916878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.442 [2024-07-15 18:52:06.916884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.442 [2024-07-15 18:52:06.916899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.442 qpair failed and we were unable to recover it. 
00:26:50.442 [2024-07-15 18:52:06.926731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.442 [2024-07-15 18:52:06.926806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.442 [2024-07-15 18:52:06.926820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.442 [2024-07-15 18:52:06.926827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.442 [2024-07-15 18:52:06.926833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.442 [2024-07-15 18:52:06.926847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.442 qpair failed and we were unable to recover it. 
00:26:50.442 [2024-07-15 18:52:06.936748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.442 [2024-07-15 18:52:06.936819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.442 [2024-07-15 18:52:06.936833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.443 [2024-07-15 18:52:06.936839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.443 [2024-07-15 18:52:06.936845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.443 [2024-07-15 18:52:06.936859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.443 qpair failed and we were unable to recover it. 
00:26:50.443 [2024-07-15 18:52:06.946777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.443 [2024-07-15 18:52:06.946841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.443 [2024-07-15 18:52:06.946855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.443 [2024-07-15 18:52:06.946861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.443 [2024-07-15 18:52:06.946867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.443 [2024-07-15 18:52:06.946881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.443 qpair failed and we were unable to recover it. 
00:26:50.443 [2024-07-15 18:52:06.956752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.443 [2024-07-15 18:52:06.956817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.443 [2024-07-15 18:52:06.956835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.443 [2024-07-15 18:52:06.956841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.443 [2024-07-15 18:52:06.956847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.443 [2024-07-15 18:52:06.956861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.443 qpair failed and we were unable to recover it. 
00:26:50.443 [2024-07-15 18:52:06.966890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.443 [2024-07-15 18:52:06.966955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.443 [2024-07-15 18:52:06.966969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.443 [2024-07-15 18:52:06.966975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.443 [2024-07-15 18:52:06.966981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.443 [2024-07-15 18:52:06.966995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.443 qpair failed and we were unable to recover it. 
00:26:50.443 [2024-07-15 18:52:06.976876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.443 [2024-07-15 18:52:06.976938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.443 [2024-07-15 18:52:06.976952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.443 [2024-07-15 18:52:06.976958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.443 [2024-07-15 18:52:06.976964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.443 [2024-07-15 18:52:06.976978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.443 qpair failed and we were unable to recover it. 
00:26:50.443 [2024-07-15 18:52:06.986838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.443 [2024-07-15 18:52:06.986899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.443 [2024-07-15 18:52:06.986913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.443 [2024-07-15 18:52:06.986919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.443 [2024-07-15 18:52:06.986925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.443 [2024-07-15 18:52:06.986939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.443 qpair failed and we were unable to recover it. 
00:26:50.443 [2024-07-15 18:52:06.996956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.443 [2024-07-15 18:52:06.997031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.443 [2024-07-15 18:52:06.997045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.443 [2024-07-15 18:52:06.997051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.443 [2024-07-15 18:52:06.997057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.443 [2024-07-15 18:52:06.997074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.443 qpair failed and we were unable to recover it. 
00:26:50.443 [2024-07-15 18:52:07.006958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.443 [2024-07-15 18:52:07.007025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.443 [2024-07-15 18:52:07.007040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.443 [2024-07-15 18:52:07.007046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.443 [2024-07-15 18:52:07.007052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.443 [2024-07-15 18:52:07.007066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.443 qpair failed and we were unable to recover it. 
00:26:50.443 [2024-07-15 18:52:07.017033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.443 [2024-07-15 18:52:07.017098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.443 [2024-07-15 18:52:07.017112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.443 [2024-07-15 18:52:07.017119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.443 [2024-07-15 18:52:07.017125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.443 [2024-07-15 18:52:07.017139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.443 qpair failed and we were unable to recover it. 
00:26:50.443 [2024-07-15 18:52:07.027062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.443 [2024-07-15 18:52:07.027123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.443 [2024-07-15 18:52:07.027137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.443 [2024-07-15 18:52:07.027143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.443 [2024-07-15 18:52:07.027149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.443 [2024-07-15 18:52:07.027164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.443 qpair failed and we were unable to recover it. 
00:26:50.443 [2024-07-15 18:52:07.037042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.443 [2024-07-15 18:52:07.037101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.443 [2024-07-15 18:52:07.037115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.443 [2024-07-15 18:52:07.037122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.443 [2024-07-15 18:52:07.037127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.443 [2024-07-15 18:52:07.037142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.443 qpair failed and we were unable to recover it. 
00:26:50.443 [2024-07-15 18:52:07.047072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.443 [2024-07-15 18:52:07.047136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.443 [2024-07-15 18:52:07.047153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.443 [2024-07-15 18:52:07.047160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.443 [2024-07-15 18:52:07.047165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.444 [2024-07-15 18:52:07.047180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.444 qpair failed and we were unable to recover it. 
00:26:50.444 [2024-07-15 18:52:07.057117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.444 [2024-07-15 18:52:07.057195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.444 [2024-07-15 18:52:07.057209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.444 [2024-07-15 18:52:07.057216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.444 [2024-07-15 18:52:07.057222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.444 [2024-07-15 18:52:07.057241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.444 qpair failed and we were unable to recover it. 
00:26:50.444 [2024-07-15 18:52:07.067116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.444 [2024-07-15 18:52:07.067174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.444 [2024-07-15 18:52:07.067188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.444 [2024-07-15 18:52:07.067194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.444 [2024-07-15 18:52:07.067200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.444 [2024-07-15 18:52:07.067214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.444 qpair failed and we were unable to recover it. 
00:26:50.444 [2024-07-15 18:52:07.077181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.444 [2024-07-15 18:52:07.077246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.444 [2024-07-15 18:52:07.077260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.444 [2024-07-15 18:52:07.077266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.444 [2024-07-15 18:52:07.077272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.444 [2024-07-15 18:52:07.077286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.444 qpair failed and we were unable to recover it. 
00:26:50.444 [2024-07-15 18:52:07.087175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.444 [2024-07-15 18:52:07.087243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.444 [2024-07-15 18:52:07.087257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.444 [2024-07-15 18:52:07.087264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.444 [2024-07-15 18:52:07.087273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.444 [2024-07-15 18:52:07.087287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.444 qpair failed and we were unable to recover it. 
00:26:50.444 [2024-07-15 18:52:07.097243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.444 [2024-07-15 18:52:07.097352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.444 [2024-07-15 18:52:07.097368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.444 [2024-07-15 18:52:07.097374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.444 [2024-07-15 18:52:07.097380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.444 [2024-07-15 18:52:07.097394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.444 qpair failed and we were unable to recover it. 
00:26:50.444 [2024-07-15 18:52:07.107281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.444 [2024-07-15 18:52:07.107346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.444 [2024-07-15 18:52:07.107362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.444 [2024-07-15 18:52:07.107369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.444 [2024-07-15 18:52:07.107374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.444 [2024-07-15 18:52:07.107389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.444 qpair failed and we were unable to recover it.
00:26:50.444 [2024-07-15 18:52:07.117284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.444 [2024-07-15 18:52:07.117359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.444 [2024-07-15 18:52:07.117373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.444 [2024-07-15 18:52:07.117380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.444 [2024-07-15 18:52:07.117385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.444 [2024-07-15 18:52:07.117400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.444 qpair failed and we were unable to recover it.
00:26:50.444 [2024-07-15 18:52:07.127248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.444 [2024-07-15 18:52:07.127316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.444 [2024-07-15 18:52:07.127330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.444 [2024-07-15 18:52:07.127336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.444 [2024-07-15 18:52:07.127342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.444 [2024-07-15 18:52:07.127356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.444 qpair failed and we were unable to recover it.
00:26:50.444 [2024-07-15 18:52:07.137383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.444 [2024-07-15 18:52:07.137452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.444 [2024-07-15 18:52:07.137466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.444 [2024-07-15 18:52:07.137473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.444 [2024-07-15 18:52:07.137478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.444 [2024-07-15 18:52:07.137492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.444 qpair failed and we were unable to recover it.
00:26:50.444 [2024-07-15 18:52:07.147350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.444 [2024-07-15 18:52:07.147412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.444 [2024-07-15 18:52:07.147426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.444 [2024-07-15 18:52:07.147433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.444 [2024-07-15 18:52:07.147438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.444 [2024-07-15 18:52:07.147452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.444 qpair failed and we were unable to recover it.
00:26:50.704 [2024-07-15 18:52:07.157403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.704 [2024-07-15 18:52:07.157469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.704 [2024-07-15 18:52:07.157484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.704 [2024-07-15 18:52:07.157490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.704 [2024-07-15 18:52:07.157496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.704 [2024-07-15 18:52:07.157511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.704 qpair failed and we were unable to recover it.
00:26:50.704 [2024-07-15 18:52:07.167432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.704 [2024-07-15 18:52:07.167497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.704 [2024-07-15 18:52:07.167511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.704 [2024-07-15 18:52:07.167518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.704 [2024-07-15 18:52:07.167524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.704 [2024-07-15 18:52:07.167538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.704 qpair failed and we were unable to recover it.
00:26:50.704 [2024-07-15 18:52:07.177452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.704 [2024-07-15 18:52:07.177517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.704 [2024-07-15 18:52:07.177532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.704 [2024-07-15 18:52:07.177541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.704 [2024-07-15 18:52:07.177547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.704 [2024-07-15 18:52:07.177561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.704 qpair failed and we were unable to recover it.
00:26:50.704 [2024-07-15 18:52:07.187423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.704 [2024-07-15 18:52:07.187487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.704 [2024-07-15 18:52:07.187503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.704 [2024-07-15 18:52:07.187509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.704 [2024-07-15 18:52:07.187515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.704 [2024-07-15 18:52:07.187530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.704 qpair failed and we were unable to recover it.
00:26:50.704 [2024-07-15 18:52:07.197546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.704 [2024-07-15 18:52:07.197609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.704 [2024-07-15 18:52:07.197623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.704 [2024-07-15 18:52:07.197630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.704 [2024-07-15 18:52:07.197636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.704 [2024-07-15 18:52:07.197650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.704 qpair failed and we were unable to recover it.
00:26:50.704 [2024-07-15 18:52:07.207558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.704 [2024-07-15 18:52:07.207618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.704 [2024-07-15 18:52:07.207633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.704 [2024-07-15 18:52:07.207639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.704 [2024-07-15 18:52:07.207645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.704 [2024-07-15 18:52:07.207659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.704 qpair failed and we were unable to recover it.
00:26:50.704 [2024-07-15 18:52:07.217554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.704 [2024-07-15 18:52:07.217612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.704 [2024-07-15 18:52:07.217627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.704 [2024-07-15 18:52:07.217633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.704 [2024-07-15 18:52:07.217639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.704 [2024-07-15 18:52:07.217652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.704 qpair failed and we were unable to recover it.
00:26:50.704 [2024-07-15 18:52:07.227609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.704 [2024-07-15 18:52:07.227670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.704 [2024-07-15 18:52:07.227684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.704 [2024-07-15 18:52:07.227690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.704 [2024-07-15 18:52:07.227696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.704 [2024-07-15 18:52:07.227711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.704 qpair failed and we were unable to recover it.
00:26:50.704 [2024-07-15 18:52:07.237642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.704 [2024-07-15 18:52:07.237704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.704 [2024-07-15 18:52:07.237720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.704 [2024-07-15 18:52:07.237727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.704 [2024-07-15 18:52:07.237733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.704 [2024-07-15 18:52:07.237748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.704 qpair failed and we were unable to recover it.
00:26:50.704 [2024-07-15 18:52:07.247708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.704 [2024-07-15 18:52:07.247786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.704 [2024-07-15 18:52:07.247800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.704 [2024-07-15 18:52:07.247807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.704 [2024-07-15 18:52:07.247812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.704 [2024-07-15 18:52:07.247827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.704 qpair failed and we were unable to recover it.
00:26:50.704 [2024-07-15 18:52:07.257730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.704 [2024-07-15 18:52:07.257798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.704 [2024-07-15 18:52:07.257813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.704 [2024-07-15 18:52:07.257819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.704 [2024-07-15 18:52:07.257825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.704 [2024-07-15 18:52:07.257839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.704 qpair failed and we were unable to recover it.
00:26:50.704 [2024-07-15 18:52:07.267768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.704 [2024-07-15 18:52:07.267848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.704 [2024-07-15 18:52:07.267862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.704 [2024-07-15 18:52:07.267872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.704 [2024-07-15 18:52:07.267877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.704 [2024-07-15 18:52:07.267891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.704 qpair failed and we were unable to recover it.
00:26:50.704 [2024-07-15 18:52:07.277757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.704 [2024-07-15 18:52:07.277820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.704 [2024-07-15 18:52:07.277835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.704 [2024-07-15 18:52:07.277841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.704 [2024-07-15 18:52:07.277847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.704 [2024-07-15 18:52:07.277861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.704 qpair failed and we were unable to recover it.
00:26:50.704 [2024-07-15 18:52:07.287821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.704 [2024-07-15 18:52:07.287882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.704 [2024-07-15 18:52:07.287896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.704 [2024-07-15 18:52:07.287903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.704 [2024-07-15 18:52:07.287909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.704 [2024-07-15 18:52:07.287924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.704 qpair failed and we were unable to recover it.
00:26:50.704 [2024-07-15 18:52:07.297824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.704 [2024-07-15 18:52:07.297886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.704 [2024-07-15 18:52:07.297900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.704 [2024-07-15 18:52:07.297906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.704 [2024-07-15 18:52:07.297912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.704 [2024-07-15 18:52:07.297926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.704 qpair failed and we were unable to recover it.
00:26:50.704 [2024-07-15 18:52:07.307846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.704 [2024-07-15 18:52:07.307906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.704 [2024-07-15 18:52:07.307920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.704 [2024-07-15 18:52:07.307927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.704 [2024-07-15 18:52:07.307933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.704 [2024-07-15 18:52:07.307946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.704 qpair failed and we were unable to recover it.
00:26:50.704 [2024-07-15 18:52:07.317895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.704 [2024-07-15 18:52:07.317964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.705 [2024-07-15 18:52:07.317978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.705 [2024-07-15 18:52:07.317985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.705 [2024-07-15 18:52:07.317991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.705 [2024-07-15 18:52:07.318005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.705 qpair failed and we were unable to recover it.
00:26:50.705 [2024-07-15 18:52:07.327936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.705 [2024-07-15 18:52:07.328028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.705 [2024-07-15 18:52:07.328042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.705 [2024-07-15 18:52:07.328048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.705 [2024-07-15 18:52:07.328054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.705 [2024-07-15 18:52:07.328068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.705 qpair failed and we were unable to recover it.
00:26:50.705 [2024-07-15 18:52:07.337923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.705 [2024-07-15 18:52:07.337991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.705 [2024-07-15 18:52:07.338006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.705 [2024-07-15 18:52:07.338012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.705 [2024-07-15 18:52:07.338017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.705 [2024-07-15 18:52:07.338031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.705 qpair failed and we were unable to recover it.
00:26:50.705 [2024-07-15 18:52:07.347958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.705 [2024-07-15 18:52:07.348021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.705 [2024-07-15 18:52:07.348035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.705 [2024-07-15 18:52:07.348041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.705 [2024-07-15 18:52:07.348047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.705 [2024-07-15 18:52:07.348061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.705 qpair failed and we were unable to recover it.
00:26:50.705 [2024-07-15 18:52:07.357980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.705 [2024-07-15 18:52:07.358040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.705 [2024-07-15 18:52:07.358058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.705 [2024-07-15 18:52:07.358065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.705 [2024-07-15 18:52:07.358070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.705 [2024-07-15 18:52:07.358084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.705 qpair failed and we were unable to recover it.
00:26:50.705 [2024-07-15 18:52:07.368059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.705 [2024-07-15 18:52:07.368122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.705 [2024-07-15 18:52:07.368136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.705 [2024-07-15 18:52:07.368143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.705 [2024-07-15 18:52:07.368149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.705 [2024-07-15 18:52:07.368163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.705 qpair failed and we were unable to recover it.
00:26:50.705 [2024-07-15 18:52:07.378075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.705 [2024-07-15 18:52:07.378141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.705 [2024-07-15 18:52:07.378155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.705 [2024-07-15 18:52:07.378162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.705 [2024-07-15 18:52:07.378167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.705 [2024-07-15 18:52:07.378181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.705 qpair failed and we were unable to recover it.
00:26:50.705 [2024-07-15 18:52:07.388135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.705 [2024-07-15 18:52:07.388199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.705 [2024-07-15 18:52:07.388213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.705 [2024-07-15 18:52:07.388220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.705 [2024-07-15 18:52:07.388229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.705 [2024-07-15 18:52:07.388244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.705 qpair failed and we were unable to recover it.
00:26:50.705 [2024-07-15 18:52:07.398090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.705 [2024-07-15 18:52:07.398156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.705 [2024-07-15 18:52:07.398170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.705 [2024-07-15 18:52:07.398177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.705 [2024-07-15 18:52:07.398182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.705 [2024-07-15 18:52:07.398199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.705 qpair failed and we were unable to recover it.
00:26:50.705 [2024-07-15 18:52:07.408147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.705 [2024-07-15 18:52:07.408213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.705 [2024-07-15 18:52:07.408231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.705 [2024-07-15 18:52:07.408238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.705 [2024-07-15 18:52:07.408243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.705 [2024-07-15 18:52:07.408257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.705 qpair failed and we were unable to recover it.
00:26:50.964 [2024-07-15 18:52:07.418246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.964 [2024-07-15 18:52:07.418316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.964 [2024-07-15 18:52:07.418330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.964 [2024-07-15 18:52:07.418337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.964 [2024-07-15 18:52:07.418343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.964 [2024-07-15 18:52:07.418357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.964 qpair failed and we were unable to recover it. 
00:26:50.964 [2024-07-15 18:52:07.428231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.964 [2024-07-15 18:52:07.428299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.964 [2024-07-15 18:52:07.428315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.964 [2024-07-15 18:52:07.428321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.964 [2024-07-15 18:52:07.428327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.964 [2024-07-15 18:52:07.428342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.964 qpair failed and we were unable to recover it. 
00:26:50.964 [2024-07-15 18:52:07.438238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.964 [2024-07-15 18:52:07.438297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.964 [2024-07-15 18:52:07.438312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.964 [2024-07-15 18:52:07.438318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.964 [2024-07-15 18:52:07.438324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:50.964 [2024-07-15 18:52:07.438338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.964 qpair failed and we were unable to recover it. 
00:26:50.964 [2024-07-15 18:52:07.448271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.964 [2024-07-15 18:52:07.448362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.964 [2024-07-15 18:52:07.448380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.964 [2024-07-15 18:52:07.448386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.964 [2024-07-15 18:52:07.448392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.964 [2024-07-15 18:52:07.448407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.964 qpair failed and we were unable to recover it.
00:26:50.964 [2024-07-15 18:52:07.458305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.964 [2024-07-15 18:52:07.458372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.964 [2024-07-15 18:52:07.458386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.964 [2024-07-15 18:52:07.458392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.964 [2024-07-15 18:52:07.458398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.964 [2024-07-15 18:52:07.458412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.964 qpair failed and we were unable to recover it.
00:26:50.964 [2024-07-15 18:52:07.468355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.964 [2024-07-15 18:52:07.468418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.964 [2024-07-15 18:52:07.468432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.965 [2024-07-15 18:52:07.468439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.965 [2024-07-15 18:52:07.468444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.965 [2024-07-15 18:52:07.468459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.965 qpair failed and we were unable to recover it.
00:26:50.965 [2024-07-15 18:52:07.478353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.965 [2024-07-15 18:52:07.478417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.965 [2024-07-15 18:52:07.478431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.965 [2024-07-15 18:52:07.478438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.965 [2024-07-15 18:52:07.478444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.965 [2024-07-15 18:52:07.478458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.965 qpair failed and we were unable to recover it.
00:26:50.965 [2024-07-15 18:52:07.488396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.965 [2024-07-15 18:52:07.488464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.965 [2024-07-15 18:52:07.488479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.965 [2024-07-15 18:52:07.488485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.965 [2024-07-15 18:52:07.488498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.965 [2024-07-15 18:52:07.488513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.965 qpair failed and we were unable to recover it.
00:26:50.965 [2024-07-15 18:52:07.498405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.965 [2024-07-15 18:52:07.498472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.965 [2024-07-15 18:52:07.498486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.965 [2024-07-15 18:52:07.498493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.965 [2024-07-15 18:52:07.498498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.965 [2024-07-15 18:52:07.498512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.965 qpair failed and we were unable to recover it.
00:26:50.965 [2024-07-15 18:52:07.508459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.965 [2024-07-15 18:52:07.508525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.965 [2024-07-15 18:52:07.508539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.965 [2024-07-15 18:52:07.508546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.965 [2024-07-15 18:52:07.508552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.965 [2024-07-15 18:52:07.508566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.965 qpair failed and we were unable to recover it.
00:26:50.965 [2024-07-15 18:52:07.518519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.965 [2024-07-15 18:52:07.518585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.965 [2024-07-15 18:52:07.518600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.965 [2024-07-15 18:52:07.518606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.965 [2024-07-15 18:52:07.518612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.965 [2024-07-15 18:52:07.518626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.965 qpair failed and we were unable to recover it.
00:26:50.965 [2024-07-15 18:52:07.528537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.965 [2024-07-15 18:52:07.528603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.965 [2024-07-15 18:52:07.528617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.965 [2024-07-15 18:52:07.528623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.965 [2024-07-15 18:52:07.528629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.965 [2024-07-15 18:52:07.528643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.965 qpair failed and we were unable to recover it.
00:26:50.965 [2024-07-15 18:52:07.538510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.965 [2024-07-15 18:52:07.538578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.965 [2024-07-15 18:52:07.538592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.965 [2024-07-15 18:52:07.538599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.965 [2024-07-15 18:52:07.538604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.965 [2024-07-15 18:52:07.538619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.965 qpair failed and we were unable to recover it.
00:26:50.965 [2024-07-15 18:52:07.548588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.965 [2024-07-15 18:52:07.548652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.965 [2024-07-15 18:52:07.548666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.965 [2024-07-15 18:52:07.548673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.965 [2024-07-15 18:52:07.548678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.965 [2024-07-15 18:52:07.548692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.965 qpair failed and we were unable to recover it.
00:26:50.965 [2024-07-15 18:52:07.558567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.965 [2024-07-15 18:52:07.558630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.965 [2024-07-15 18:52:07.558645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.965 [2024-07-15 18:52:07.558651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.965 [2024-07-15 18:52:07.558657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.965 [2024-07-15 18:52:07.558671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.965 qpair failed and we were unable to recover it.
00:26:50.965 [2024-07-15 18:52:07.568631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.965 [2024-07-15 18:52:07.568691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.965 [2024-07-15 18:52:07.568706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.965 [2024-07-15 18:52:07.568712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.965 [2024-07-15 18:52:07.568718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.965 [2024-07-15 18:52:07.568732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.965 qpair failed and we were unable to recover it.
00:26:50.965 [2024-07-15 18:52:07.578628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.965 [2024-07-15 18:52:07.578693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.965 [2024-07-15 18:52:07.578708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.965 [2024-07-15 18:52:07.578717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.965 [2024-07-15 18:52:07.578723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.965 [2024-07-15 18:52:07.578736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.965 qpair failed and we were unable to recover it.
00:26:50.965 [2024-07-15 18:52:07.588635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.965 [2024-07-15 18:52:07.588696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.965 [2024-07-15 18:52:07.588710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.965 [2024-07-15 18:52:07.588716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.965 [2024-07-15 18:52:07.588722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.965 [2024-07-15 18:52:07.588736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.965 qpair failed and we were unable to recover it.
00:26:50.965 [2024-07-15 18:52:07.598638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.965 [2024-07-15 18:52:07.598704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.965 [2024-07-15 18:52:07.598719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.965 [2024-07-15 18:52:07.598725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.965 [2024-07-15 18:52:07.598731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.965 [2024-07-15 18:52:07.598745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.965 qpair failed and we were unable to recover it.
00:26:50.965 [2024-07-15 18:52:07.608722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.965 [2024-07-15 18:52:07.608788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.965 [2024-07-15 18:52:07.608802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.965 [2024-07-15 18:52:07.608809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.965 [2024-07-15 18:52:07.608815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.965 [2024-07-15 18:52:07.608829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.966 qpair failed and we were unable to recover it.
00:26:50.966 [2024-07-15 18:52:07.618694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.966 [2024-07-15 18:52:07.618777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.966 [2024-07-15 18:52:07.618792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.966 [2024-07-15 18:52:07.618798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.966 [2024-07-15 18:52:07.618804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.966 [2024-07-15 18:52:07.618819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.966 qpair failed and we were unable to recover it.
00:26:50.966 [2024-07-15 18:52:07.628807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.966 [2024-07-15 18:52:07.628869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.966 [2024-07-15 18:52:07.628883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.966 [2024-07-15 18:52:07.628890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.966 [2024-07-15 18:52:07.628896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.966 [2024-07-15 18:52:07.628911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.966 qpair failed and we were unable to recover it.
00:26:50.966 [2024-07-15 18:52:07.638757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.966 [2024-07-15 18:52:07.638818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.966 [2024-07-15 18:52:07.638832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.966 [2024-07-15 18:52:07.638839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.966 [2024-07-15 18:52:07.638845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.966 [2024-07-15 18:52:07.638859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.966 qpair failed and we were unable to recover it.
00:26:50.966 [2024-07-15 18:52:07.648774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.966 [2024-07-15 18:52:07.648836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.966 [2024-07-15 18:52:07.648850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.966 [2024-07-15 18:52:07.648856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.966 [2024-07-15 18:52:07.648862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.966 [2024-07-15 18:52:07.648876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.966 qpair failed and we were unable to recover it.
00:26:50.966 [2024-07-15 18:52:07.658848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.966 [2024-07-15 18:52:07.658954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.966 [2024-07-15 18:52:07.658969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.966 [2024-07-15 18:52:07.658976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.966 [2024-07-15 18:52:07.658981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.966 [2024-07-15 18:52:07.658997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.966 qpair failed and we were unable to recover it.
00:26:50.966 [2024-07-15 18:52:07.668915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.966 [2024-07-15 18:52:07.668990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.966 [2024-07-15 18:52:07.669005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.966 [2024-07-15 18:52:07.669014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.966 [2024-07-15 18:52:07.669020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:50.966 [2024-07-15 18:52:07.669034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:50.966 qpair failed and we were unable to recover it.
00:26:51.226 [2024-07-15 18:52:07.678933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.226 [2024-07-15 18:52:07.679001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.226 [2024-07-15 18:52:07.679015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.226 [2024-07-15 18:52:07.679022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.226 [2024-07-15 18:52:07.679028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.226 [2024-07-15 18:52:07.679041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.226 qpair failed and we were unable to recover it.
00:26:51.226 [2024-07-15 18:52:07.688989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.226 [2024-07-15 18:52:07.689055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.226 [2024-07-15 18:52:07.689070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.226 [2024-07-15 18:52:07.689076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.226 [2024-07-15 18:52:07.689082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.226 [2024-07-15 18:52:07.689096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.226 qpair failed and we were unable to recover it.
00:26:51.226 [2024-07-15 18:52:07.699007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.226 [2024-07-15 18:52:07.699080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.226 [2024-07-15 18:52:07.699094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.226 [2024-07-15 18:52:07.699101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.226 [2024-07-15 18:52:07.699106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.226 [2024-07-15 18:52:07.699120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.226 qpair failed and we were unable to recover it.
00:26:51.226 [2024-07-15 18:52:07.709028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.226 [2024-07-15 18:52:07.709089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.226 [2024-07-15 18:52:07.709104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.226 [2024-07-15 18:52:07.709110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.226 [2024-07-15 18:52:07.709116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.226 [2024-07-15 18:52:07.709130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.226 qpair failed and we were unable to recover it.
00:26:51.226 [2024-07-15 18:52:07.718989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.226 [2024-07-15 18:52:07.719052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.226 [2024-07-15 18:52:07.719067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.226 [2024-07-15 18:52:07.719073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.226 [2024-07-15 18:52:07.719079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.226 [2024-07-15 18:52:07.719092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.226 qpair failed and we were unable to recover it.
00:26:51.226 [2024-07-15 18:52:07.729074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.226 [2024-07-15 18:52:07.729141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.226 [2024-07-15 18:52:07.729155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.226 [2024-07-15 18:52:07.729162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.226 [2024-07-15 18:52:07.729167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.226 [2024-07-15 18:52:07.729181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.226 qpair failed and we were unable to recover it.
00:26:51.226 [2024-07-15 18:52:07.739119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.227 [2024-07-15 18:52:07.739190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.227 [2024-07-15 18:52:07.739205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.227 [2024-07-15 18:52:07.739212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.227 [2024-07-15 18:52:07.739217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.227 [2024-07-15 18:52:07.739236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.227 qpair failed and we were unable to recover it.
00:26:51.227 [2024-07-15 18:52:07.749143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.227 [2024-07-15 18:52:07.749207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.227 [2024-07-15 18:52:07.749221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.227 [2024-07-15 18:52:07.749232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.227 [2024-07-15 18:52:07.749238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.227 [2024-07-15 18:52:07.749253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.227 qpair failed and we were unable to recover it.
00:26:51.227 [2024-07-15 18:52:07.759175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.227 [2024-07-15 18:52:07.759248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.227 [2024-07-15 18:52:07.759267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.227 [2024-07-15 18:52:07.759273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.227 [2024-07-15 18:52:07.759279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.227 [2024-07-15 18:52:07.759293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.227 qpair failed and we were unable to recover it.
00:26:51.227 [2024-07-15 18:52:07.769187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.227 [2024-07-15 18:52:07.769260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.227 [2024-07-15 18:52:07.769275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.227 [2024-07-15 18:52:07.769281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.227 [2024-07-15 18:52:07.769287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:51.227 [2024-07-15 18:52:07.769302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.227 qpair failed and we were unable to recover it. 
00:26:51.227 [2024-07-15 18:52:07.779163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.227 [2024-07-15 18:52:07.779230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.227 [2024-07-15 18:52:07.779245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.227 [2024-07-15 18:52:07.779252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.227 [2024-07-15 18:52:07.779258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:51.227 [2024-07-15 18:52:07.779273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.227 qpair failed and we were unable to recover it. 
00:26:51.227 [2024-07-15 18:52:07.789249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.227 [2024-07-15 18:52:07.789329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.227 [2024-07-15 18:52:07.789343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.227 [2024-07-15 18:52:07.789350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.227 [2024-07-15 18:52:07.789355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:51.227 [2024-07-15 18:52:07.789370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.227 qpair failed and we were unable to recover it. 
00:26:51.227 [2024-07-15 18:52:07.799294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.227 [2024-07-15 18:52:07.799361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.227 [2024-07-15 18:52:07.799377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.227 [2024-07-15 18:52:07.799383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.227 [2024-07-15 18:52:07.799389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:51.227 [2024-07-15 18:52:07.799406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.227 qpair failed and we were unable to recover it. 
00:26:51.227 [2024-07-15 18:52:07.809328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.227 [2024-07-15 18:52:07.809393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.227 [2024-07-15 18:52:07.809408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.227 [2024-07-15 18:52:07.809414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.227 [2024-07-15 18:52:07.809419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.227 [2024-07-15 18:52:07.809434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.227 qpair failed and we were unable to recover it.
00:26:51.227 [2024-07-15 18:52:07.819363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.227 [2024-07-15 18:52:07.819473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.227 [2024-07-15 18:52:07.819488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.227 [2024-07-15 18:52:07.819496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.227 [2024-07-15 18:52:07.819503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.227 [2024-07-15 18:52:07.819518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.227 qpair failed and we were unable to recover it.
00:26:51.227 [2024-07-15 18:52:07.829391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.227 [2024-07-15 18:52:07.829457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.227 [2024-07-15 18:52:07.829471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.227 [2024-07-15 18:52:07.829477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.228 [2024-07-15 18:52:07.829483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.228 [2024-07-15 18:52:07.829497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.228 qpair failed and we were unable to recover it.
00:26:51.228 [2024-07-15 18:52:07.839359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.228 [2024-07-15 18:52:07.839425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.228 [2024-07-15 18:52:07.839440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.228 [2024-07-15 18:52:07.839446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.228 [2024-07-15 18:52:07.839452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.228 [2024-07-15 18:52:07.839466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.228 qpair failed and we were unable to recover it.
00:26:51.228 [2024-07-15 18:52:07.849392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.228 [2024-07-15 18:52:07.849454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.228 [2024-07-15 18:52:07.849472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.228 [2024-07-15 18:52:07.849479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.228 [2024-07-15 18:52:07.849484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.228 [2024-07-15 18:52:07.849499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.228 qpair failed and we were unable to recover it.
00:26:51.228 [2024-07-15 18:52:07.859457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.228 [2024-07-15 18:52:07.859520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.228 [2024-07-15 18:52:07.859534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.228 [2024-07-15 18:52:07.859541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.228 [2024-07-15 18:52:07.859546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.228 [2024-07-15 18:52:07.859561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.228 qpair failed and we were unable to recover it.
00:26:51.228 [2024-07-15 18:52:07.869475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.228 [2024-07-15 18:52:07.869537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.228 [2024-07-15 18:52:07.869552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.228 [2024-07-15 18:52:07.869558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.228 [2024-07-15 18:52:07.869564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.228 [2024-07-15 18:52:07.869578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.228 qpair failed and we were unable to recover it.
00:26:51.228 [2024-07-15 18:52:07.879473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.228 [2024-07-15 18:52:07.879573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.228 [2024-07-15 18:52:07.879588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.228 [2024-07-15 18:52:07.879594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.228 [2024-07-15 18:52:07.879600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.228 [2024-07-15 18:52:07.879614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.228 qpair failed and we were unable to recover it.
00:26:51.228 [2024-07-15 18:52:07.889575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.228 [2024-07-15 18:52:07.889665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.228 [2024-07-15 18:52:07.889680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.228 [2024-07-15 18:52:07.889686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.228 [2024-07-15 18:52:07.889695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.228 [2024-07-15 18:52:07.889710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.228 qpair failed and we were unable to recover it.
00:26:51.228 [2024-07-15 18:52:07.899588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.228 [2024-07-15 18:52:07.899657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.228 [2024-07-15 18:52:07.899672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.228 [2024-07-15 18:52:07.899678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.228 [2024-07-15 18:52:07.899684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.228 [2024-07-15 18:52:07.899699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.228 qpair failed and we were unable to recover it.
00:26:51.228 [2024-07-15 18:52:07.909608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.228 [2024-07-15 18:52:07.909674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.228 [2024-07-15 18:52:07.909688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.228 [2024-07-15 18:52:07.909695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.228 [2024-07-15 18:52:07.909700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.228 [2024-07-15 18:52:07.909715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.228 qpair failed and we were unable to recover it.
00:26:51.228 [2024-07-15 18:52:07.919649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.228 [2024-07-15 18:52:07.919713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.228 [2024-07-15 18:52:07.919727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.228 [2024-07-15 18:52:07.919734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.228 [2024-07-15 18:52:07.919740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.228 [2024-07-15 18:52:07.919754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.228 qpair failed and we were unable to recover it.
00:26:51.229 [2024-07-15 18:52:07.929671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.229 [2024-07-15 18:52:07.929733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.229 [2024-07-15 18:52:07.929748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.229 [2024-07-15 18:52:07.929755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.229 [2024-07-15 18:52:07.929761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.229 [2024-07-15 18:52:07.929774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.229 qpair failed and we were unable to recover it.
00:26:51.488 [2024-07-15 18:52:07.939705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.488 [2024-07-15 18:52:07.939775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.488 [2024-07-15 18:52:07.939790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.488 [2024-07-15 18:52:07.939797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.488 [2024-07-15 18:52:07.939803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.488 [2024-07-15 18:52:07.939817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.488 qpair failed and we were unable to recover it.
00:26:51.488 [2024-07-15 18:52:07.949736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.488 [2024-07-15 18:52:07.949814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.488 [2024-07-15 18:52:07.949828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.488 [2024-07-15 18:52:07.949835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.488 [2024-07-15 18:52:07.949840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.488 [2024-07-15 18:52:07.949854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.488 qpair failed and we were unable to recover it.
00:26:51.488 [2024-07-15 18:52:07.959747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.488 [2024-07-15 18:52:07.959812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.488 [2024-07-15 18:52:07.959826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.488 [2024-07-15 18:52:07.959833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.488 [2024-07-15 18:52:07.959839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.488 [2024-07-15 18:52:07.959853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.488 qpair failed and we were unable to recover it.
00:26:51.488 [2024-07-15 18:52:07.969785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.488 [2024-07-15 18:52:07.969848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.488 [2024-07-15 18:52:07.969862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.488 [2024-07-15 18:52:07.969869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.488 [2024-07-15 18:52:07.969875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.488 [2024-07-15 18:52:07.969889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.488 qpair failed and we were unable to recover it.
00:26:51.488 [2024-07-15 18:52:07.979816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.488 [2024-07-15 18:52:07.979879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.488 [2024-07-15 18:52:07.979893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.488 [2024-07-15 18:52:07.979900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.488 [2024-07-15 18:52:07.979909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.488 [2024-07-15 18:52:07.979922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.488 qpair failed and we were unable to recover it.
00:26:51.488 [2024-07-15 18:52:07.989844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.488 [2024-07-15 18:52:07.989911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.488 [2024-07-15 18:52:07.989925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.488 [2024-07-15 18:52:07.989932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.488 [2024-07-15 18:52:07.989938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.488 [2024-07-15 18:52:07.989951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.488 qpair failed and we were unable to recover it.
00:26:51.488 [2024-07-15 18:52:07.999858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.488 [2024-07-15 18:52:07.999926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.488 [2024-07-15 18:52:07.999941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.488 [2024-07-15 18:52:07.999947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.488 [2024-07-15 18:52:07.999953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.488 [2024-07-15 18:52:07.999967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.488 qpair failed and we were unable to recover it.
00:26:51.488 [2024-07-15 18:52:08.009900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.488 [2024-07-15 18:52:08.009961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.488 [2024-07-15 18:52:08.009976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.488 [2024-07-15 18:52:08.009982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.488 [2024-07-15 18:52:08.009988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.488 [2024-07-15 18:52:08.010002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.488 qpair failed and we were unable to recover it.
00:26:51.488 [2024-07-15 18:52:08.019932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.488 [2024-07-15 18:52:08.019997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.488 [2024-07-15 18:52:08.020012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.488 [2024-07-15 18:52:08.020018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.489 [2024-07-15 18:52:08.020024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.489 [2024-07-15 18:52:08.020038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.489 qpair failed and we were unable to recover it.
00:26:51.489 [2024-07-15 18:52:08.029958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.489 [2024-07-15 18:52:08.030019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.489 [2024-07-15 18:52:08.030034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.489 [2024-07-15 18:52:08.030041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.489 [2024-07-15 18:52:08.030046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.489 [2024-07-15 18:52:08.030061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.489 qpair failed and we were unable to recover it.
00:26:51.489 [2024-07-15 18:52:08.040018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.489 [2024-07-15 18:52:08.040131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.489 [2024-07-15 18:52:08.040147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.489 [2024-07-15 18:52:08.040153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.489 [2024-07-15 18:52:08.040159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.489 [2024-07-15 18:52:08.040173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.489 qpair failed and we were unable to recover it.
00:26:51.489 [2024-07-15 18:52:08.050016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.489 [2024-07-15 18:52:08.050082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.489 [2024-07-15 18:52:08.050097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.489 [2024-07-15 18:52:08.050103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.489 [2024-07-15 18:52:08.050109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:51.489 [2024-07-15 18:52:08.050124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.489 qpair failed and we were unable to recover it. 
00:26:51.489 [2024-07-15 18:52:08.060058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.489 [2024-07-15 18:52:08.060122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.489 [2024-07-15 18:52:08.060137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.489 [2024-07-15 18:52:08.060143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.489 [2024-07-15 18:52:08.060149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:51.489 [2024-07-15 18:52:08.060163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.489 qpair failed and we were unable to recover it. 
00:26:51.489 [2024-07-15 18:52:08.070044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.489 [2024-07-15 18:52:08.070102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.489 [2024-07-15 18:52:08.070116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.489 [2024-07-15 18:52:08.070126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.489 [2024-07-15 18:52:08.070131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90 00:26:51.489 [2024-07-15 18:52:08.070146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.489 qpair failed and we were unable to recover it. 
00:26:51.489 [log condensed: the same seven-entry CONNECT failure block repeated 31 more times between 18:52:08.080 and 18:52:08.381, differing only in timestamps — Unknown controller ID 0x1; Connect command failed, rc -5, to traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1; sct 1, sc 130; CQ transport error -6 on qpair id 2; qpair failed and we were unable to recover it]
00:26:51.795 [2024-07-15 18:52:08.391011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:51.795 [2024-07-15 18:52:08.391073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:51.795 [2024-07-15 18:52:08.391087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:51.795 [2024-07-15 18:52:08.391093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:51.795 [2024-07-15 18:52:08.391099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff7c0000b90
00:26:51.795 [2024-07-15 18:52:08.391113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:51.795 qpair failed and we were unable to recover it.
00:26:51.795 [2024-07-15 18:52:08.391236] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:26:51.795 A controller has encountered a failure and is being reset.
00:26:51.795 [2024-07-15 18:52:08.391333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc000 (9): Bad file descriptor
00:26:51.795 Controller properly reset.
00:26:52.052 Initializing NVMe Controllers
00:26:52.052 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:52.052 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:52.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:26:52.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:26:52.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:26:52.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:26:52.052 Initialization complete. Launching workers.
00:26:52.052 Starting thread on core 1
00:26:52.052 Starting thread on core 2
00:26:52.052 Starting thread on core 3
00:26:52.052 Starting thread on core 0
00:26:52.052 18:52:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:26:52.052
00:26:52.052 real 0m11.330s
00:26:52.052 user 0m21.299s
00:26:52.052 sys 0m4.233s
00:26:52.052 18:52:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:26:52.052 18:52:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:52.052 ************************************
00:26:52.052 END TEST nvmf_target_disconnect_tc2
00:26:52.052 ************************************
00:26:52.052 18:52:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0
00:26:52.052 18:52:08 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:26:52.052 18:52:08 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:26:52.052 18:52:08 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:26:52.052 18:52:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:52.052 18:52:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync
00:26:52.052 18:52:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:52.052 18:52:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e
00:26:52.052 18:52:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:52.052 18:52:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:52.052 rmmod nvme_tcp
00:26:52.052 rmmod nvme_fabrics
00:26:52.052 rmmod nvme_keyring
00:26:52.052 18:52:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:52.052 18:52:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e
00:26:52.052 18:52:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0
00:26:52.052 18:52:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1249295 ']'
00:26:52.052 18:52:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1249295
00:26:52.052 18:52:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1249295 ']'
00:26:52.052 18:52:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1249295
00:26:52.052 18:52:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname
00:26:52.052 18:52:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:52.052 18:52:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1249295
00:26:52.052 18:52:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4
00:26:52.052 18:52:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']'
00:26:52.052 18:52:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1249295'
00:26:52.052 killing process with pid 1249295
00:26:52.052 18:52:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1249295
00:26:52.052 18:52:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1249295
00:26:52.311 18:52:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:26:52.311 18:52:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:26:52.311 18:52:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:26:52.311 18:52:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:52.311 18:52:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns
00:26:52.311 18:52:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:52.311 18:52:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:52.311 18:52:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:54.844 18:52:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:54.844
00:26:54.844 real 0m19.429s
00:26:54.844 user 0m48.614s
00:26:54.844 sys 0m8.623s
00:26:54.844 18:52:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable
00:26:54.844 18:52:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:26:54.844 ************************************
00:26:54.844 END TEST nvmf_target_disconnect
00:26:54.844 ************************************
00:26:54.844 18:52:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:26:54.844 18:52:10 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host
00:26:54.844 18:52:10 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:26:54.844 18:52:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:26:54.844 18:52:11 nvmf_tcp --
nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:26:54.844 00:26:54.844 real 20m52.011s 00:26:54.844 user 45m4.658s 00:26:54.844 sys 6m14.708s 00:26:54.844 18:52:11 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:54.844 18:52:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:54.844 ************************************ 00:26:54.844 END TEST nvmf_tcp 00:26:54.844 ************************************ 00:26:54.844 18:52:11 -- common/autotest_common.sh@1142 -- # return 0 00:26:54.844 18:52:11 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:26:54.844 18:52:11 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:54.844 18:52:11 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:54.844 18:52:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:54.844 18:52:11 -- common/autotest_common.sh@10 -- # set +x 00:26:54.844 ************************************ 00:26:54.844 START TEST spdkcli_nvmf_tcp 00:26:54.844 ************************************ 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:54.844 * Looking for test storage... 
00:26:54.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 
00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1250825 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1250825 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1250825 ']' 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:54.844 18:52:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:54.844 [2024-07-15 18:52:11.216029] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:26:54.844 [2024-07-15 18:52:11.216078] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1250825 ] 00:26:54.844 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.844 [2024-07-15 18:52:11.265459] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:54.844 [2024-07-15 18:52:11.345338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.844 [2024-07-15 18:52:11.345341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.413 18:52:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:55.413 18:52:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:26:55.413 18:52:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:55.413 18:52:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:55.413 18:52:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:55.413 18:52:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:55.413 18:52:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:26:55.413 18:52:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:55.413 18:52:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:55.413 18:52:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:55.413 18:52:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:55.413 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:55.413 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:55.413 '\''/bdevs/malloc create 32 512 Malloc4'\'' 
'\''Malloc4'\'' True 00:26:55.413 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:26:55.413 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:55.413 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:55.413 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:55.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:55.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:55.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:55.413 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:55.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:55.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:55.413 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:55.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:26:55.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:55.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:55.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:55.413 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:55.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:26:55.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:26:55.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:55.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:26:55.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:55.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:26:55.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:26:55.413 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:26:55.413 ' 00:26:57.944 [2024-07-15 18:52:14.444615] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:59.321 [2024-07-15 18:52:15.620490] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:01.226 [2024-07-15 18:52:17.783036] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:03.132 [2024-07-15 18:52:19.640831] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:04.509 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:04.509 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:04.509 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:04.509 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:04.509 Executing command: 
['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:04.509 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:04.509 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:04.509 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:04.509 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:04.509 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:04.509 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:04.509 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:04.509 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:04.509 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:04.509 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:04.509 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:04.509 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:04.509 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:04.509 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create 
nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:04.509 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:04.509 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:04.509 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:04.509 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:04.509 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:04.509 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:04.509 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:04.509 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:04.509 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:04.509 18:52:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:04.509 18:52:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:04.509 18:52:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:04.509 18:52:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:04.509 18:52:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:04.509 18:52:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:04.768 18:52:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:27:04.768 18:52:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:05.029 18:52:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:05.029 18:52:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:05.029 18:52:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:05.029 18:52:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:05.029 18:52:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:05.029 18:52:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:05.029 18:52:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:05.029 18:52:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:05.029 18:52:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:05.029 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:05.029 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:05.029 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:05.029 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:05.029 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:05.029 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 
00:27:05.029 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:05.029 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:05.029 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:05.029 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:05.029 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:05.029 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:05.029 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:05.029 ' 00:27:10.351 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:10.351 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:10.351 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:10.351 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:10.351 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:10.351 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:10.351 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:10.351 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:10.351 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:10.351 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:10.351 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:27:10.351 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:10.351 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:10.351 Executing command: 
['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:10.351 18:52:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:10.351 18:52:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:10.351 18:52:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:10.351 18:52:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1250825 00:27:10.351 18:52:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1250825 ']' 00:27:10.351 18:52:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1250825 00:27:10.351 18:52:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:27:10.351 18:52:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:10.351 18:52:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1250825 00:27:10.351 18:52:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:10.351 18:52:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:10.351 18:52:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1250825' 00:27:10.351 killing process with pid 1250825 00:27:10.351 18:52:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1250825 00:27:10.351 18:52:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1250825 00:27:10.351 18:52:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:10.351 18:52:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:10.351 18:52:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1250825 ']' 00:27:10.351 18:52:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1250825 00:27:10.351 18:52:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1250825 ']' 00:27:10.351 18:52:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1250825 00:27:10.351 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1250825) - No such process 00:27:10.351 18:52:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1250825 is not found' 00:27:10.351 Process with pid 1250825 is not found 00:27:10.351 18:52:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:10.351 18:52:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:10.351 18:52:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:10.351 00:27:10.351 real 0m15.781s 00:27:10.351 user 0m32.743s 00:27:10.351 sys 0m0.679s 00:27:10.351 18:52:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:10.351 18:52:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:10.351 ************************************ 00:27:10.351 END TEST spdkcli_nvmf_tcp 00:27:10.351 ************************************ 00:27:10.351 18:52:26 -- common/autotest_common.sh@1142 -- # return 0 00:27:10.351 18:52:26 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:10.351 18:52:26 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:10.351 18:52:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:10.351 18:52:26 -- common/autotest_common.sh@10 -- # set +x 00:27:10.351 ************************************ 00:27:10.351 START TEST nvmf_identify_passthru 00:27:10.351 ************************************ 00:27:10.351 18:52:26 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:10.351 * Looking for test storage... 
00:27:10.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:10.351 18:52:27 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:10.351 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:27:10.351 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:10.352 
18:52:27 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:10.352 18:52:27 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:10.352 18:52:27 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:10.352 18:52:27 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:10.352 18:52:27 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.352 18:52:27 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.352 18:52:27 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.352 18:52:27 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:10.352 18:52:27 nvmf_identify_passthru -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:10.352 18:52:27 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:10.352 18:52:27 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:10.352 18:52:27 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:10.352 18:52:27 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:10.352 18:52:27 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.352 18:52:27 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.352 18:52:27 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.352 18:52:27 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:10.352 18:52:27 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.352 18:52:27 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.352 18:52:27 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:10.352 18:52:27 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:10.352 18:52:27 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:27:10.352 18:52:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:15.624 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:15.624 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:27:15.624 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@291 
-- # local -a pci_devs 00:27:15.624 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:15.624 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:15.624 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:15.624 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:15.624 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:27:15.624 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:15.624 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:27:15.624 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:27:15.624 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:27:15.624 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:15.625 
18:52:31 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:15.625 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:15.625 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:15.625 Found net devices under 0000:86:00.0: cvl_0_0 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:15.625 18:52:31 nvmf_identify_passthru -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:15.625 Found net devices under 0000:86:00.1: cvl_0_1 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:15.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:15.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:27:15.625 00:27:15.625 --- 10.0.0.2 ping statistics --- 00:27:15.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:15.625 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:15.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:15.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:27:15.625 00:27:15.625 --- 10.0.0.1 ping statistics --- 00:27:15.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:15.625 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:15.625 18:52:31 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:15.625 18:52:31 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:15.625 18:52:31 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:15.625 18:52:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:15.625 18:52:31 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:15.625 18:52:31 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:27:15.625 18:52:31 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:27:15.625 18:52:31 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:27:15.625 18:52:31 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:27:15.625 18:52:31 nvmf_identify_passthru -- 
common/autotest_common.sh@1513 -- # bdfs=() 00:27:15.625 18:52:31 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:27:15.625 18:52:31 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:15.625 18:52:31 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:15.625 18:52:31 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:27:15.625 18:52:31 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:27:15.625 18:52:31 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:27:15.625 18:52:31 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:5e:00.0 00:27:15.625 18:52:31 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:27:15.625 18:52:31 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:27:15.625 18:52:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:27:15.625 18:52:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:15.625 18:52:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:15.625 EAL: No free 2048 kB hugepages reported on node 1 00:27:19.817 18:52:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:27:19.817 18:52:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:27:19.817 18:52:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:27:19.817 18:52:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:19.817 EAL: No free 2048 kB hugepages reported on node 1 00:27:24.010 18:52:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:27:24.010 18:52:40 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:24.010 18:52:40 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:24.010 18:52:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:24.010 18:52:40 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:24.010 18:52:40 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:24.010 18:52:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:24.010 18:52:40 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1257622 00:27:24.010 18:52:40 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:24.010 18:52:40 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:24.010 18:52:40 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1257622 00:27:24.010 18:52:40 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1257622 ']' 00:27:24.010 18:52:40 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.010 18:52:40 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:24.010 18:52:40 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:24.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:24.010 18:52:40 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:24.010 18:52:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:24.010 [2024-07-15 18:52:40.205374] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:27:24.010 [2024-07-15 18:52:40.205422] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:24.010 EAL: No free 2048 kB hugepages reported on node 1 00:27:24.010 [2024-07-15 18:52:40.263858] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:24.010 [2024-07-15 18:52:40.345521] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:24.010 [2024-07-15 18:52:40.345558] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:24.011 [2024-07-15 18:52:40.345565] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:24.011 [2024-07-15 18:52:40.345571] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:24.011 [2024-07-15 18:52:40.345576] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:24.011 [2024-07-15 18:52:40.345619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:24.011 [2024-07-15 18:52:40.345714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:24.011 [2024-07-15 18:52:40.345798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:24.011 [2024-07-15 18:52:40.345799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.580 18:52:41 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:24.580 18:52:41 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:27:24.580 18:52:41 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:24.580 18:52:41 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.580 18:52:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:24.580 INFO: Log level set to 20 00:27:24.580 INFO: Requests: 00:27:24.580 { 00:27:24.580 "jsonrpc": "2.0", 00:27:24.580 "method": "nvmf_set_config", 00:27:24.580 "id": 1, 00:27:24.580 "params": { 00:27:24.580 "admin_cmd_passthru": { 00:27:24.580 "identify_ctrlr": true 00:27:24.580 } 00:27:24.580 } 00:27:24.580 } 00:27:24.580 00:27:24.580 INFO: response: 00:27:24.580 { 00:27:24.580 "jsonrpc": "2.0", 00:27:24.580 "id": 1, 00:27:24.580 "result": true 00:27:24.580 } 00:27:24.580 00:27:24.580 18:52:41 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.580 18:52:41 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:24.580 18:52:41 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.580 18:52:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:24.580 INFO: Setting log level to 20 00:27:24.580 INFO: Setting log level to 20 00:27:24.580 INFO: Log level set to 20 00:27:24.580 INFO: Log level set to 20 00:27:24.580 
INFO: Requests: 00:27:24.580 { 00:27:24.580 "jsonrpc": "2.0", 00:27:24.580 "method": "framework_start_init", 00:27:24.580 "id": 1 00:27:24.580 } 00:27:24.580 00:27:24.580 INFO: Requests: 00:27:24.580 { 00:27:24.580 "jsonrpc": "2.0", 00:27:24.580 "method": "framework_start_init", 00:27:24.580 "id": 1 00:27:24.580 } 00:27:24.580 00:27:24.580 [2024-07-15 18:52:41.103071] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:24.580 INFO: response: 00:27:24.580 { 00:27:24.580 "jsonrpc": "2.0", 00:27:24.580 "id": 1, 00:27:24.580 "result": true 00:27:24.580 } 00:27:24.580 00:27:24.580 INFO: response: 00:27:24.580 { 00:27:24.580 "jsonrpc": "2.0", 00:27:24.580 "id": 1, 00:27:24.580 "result": true 00:27:24.580 } 00:27:24.580 00:27:24.580 18:52:41 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.580 18:52:41 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:24.580 18:52:41 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.580 18:52:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:24.580 INFO: Setting log level to 40 00:27:24.580 INFO: Setting log level to 40 00:27:24.580 INFO: Setting log level to 40 00:27:24.580 [2024-07-15 18:52:41.116521] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:24.580 18:52:41 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.580 18:52:41 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:24.580 18:52:41 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:24.580 18:52:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:24.580 18:52:41 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:27:24.580 18:52:41 
nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.580 18:52:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:27.869 Nvme0n1 00:27:27.869 18:52:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.869 18:52:43 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:27.869 18:52:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.869 18:52:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:27.869 18:52:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.869 18:52:43 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:27.869 18:52:44 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.870 18:52:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:27.870 18:52:44 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.870 18:52:44 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:27.870 18:52:44 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.870 18:52:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:27.870 [2024-07-15 18:52:44.020381] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:27.870 18:52:44 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.870 18:52:44 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:27.870 18:52:44 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.870 18:52:44 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:27.870 [ 00:27:27.870 { 00:27:27.870 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:27.870 "subtype": "Discovery", 00:27:27.870 "listen_addresses": [], 00:27:27.870 "allow_any_host": true, 00:27:27.870 "hosts": [] 00:27:27.870 }, 00:27:27.870 { 00:27:27.870 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:27.870 "subtype": "NVMe", 00:27:27.870 "listen_addresses": [ 00:27:27.870 { 00:27:27.870 "trtype": "TCP", 00:27:27.870 "adrfam": "IPv4", 00:27:27.870 "traddr": "10.0.0.2", 00:27:27.870 "trsvcid": "4420" 00:27:27.870 } 00:27:27.870 ], 00:27:27.870 "allow_any_host": true, 00:27:27.870 "hosts": [], 00:27:27.870 "serial_number": "SPDK00000000000001", 00:27:27.870 "model_number": "SPDK bdev Controller", 00:27:27.870 "max_namespaces": 1, 00:27:27.870 "min_cntlid": 1, 00:27:27.870 "max_cntlid": 65519, 00:27:27.870 "namespaces": [ 00:27:27.870 { 00:27:27.870 "nsid": 1, 00:27:27.870 "bdev_name": "Nvme0n1", 00:27:27.870 "name": "Nvme0n1", 00:27:27.870 "nguid": "E0ACF73C8EE042BCA3F1F00264A648AF", 00:27:27.870 "uuid": "e0acf73c-8ee0-42bc-a3f1-f00264a648af" 00:27:27.870 } 00:27:27.870 ] 00:27:27.870 } 00:27:27.870 ] 00:27:27.870 18:52:44 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.870 18:52:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:27.870 18:52:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:27.870 18:52:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:27.870 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.870 18:52:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:27:27.870 18:52:44 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:27.870 18:52:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:27.870 18:52:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:27.870 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.870 18:52:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:27:27.870 18:52:44 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:27:27.870 18:52:44 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:27:27.870 18:52:44 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:27.870 18:52:44 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.870 18:52:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:27.870 18:52:44 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.870 18:52:44 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:27.870 18:52:44 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:27.870 18:52:44 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:27.870 18:52:44 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:27:27.870 18:52:44 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:27.870 18:52:44 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:27:27.870 18:52:44 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:27.870 18:52:44 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:27.870 rmmod 
nvme_tcp 00:27:27.870 rmmod nvme_fabrics 00:27:27.870 rmmod nvme_keyring 00:27:27.870 18:52:44 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:27.870 18:52:44 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:27:27.870 18:52:44 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:27:27.870 18:52:44 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1257622 ']' 00:27:27.870 18:52:44 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1257622 00:27:27.870 18:52:44 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1257622 ']' 00:27:27.870 18:52:44 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1257622 00:27:27.870 18:52:44 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:27:27.870 18:52:44 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:27.870 18:52:44 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1257622 00:27:27.870 18:52:44 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:27.870 18:52:44 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:27.870 18:52:44 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1257622' 00:27:27.870 killing process with pid 1257622 00:27:27.870 18:52:44 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1257622 00:27:27.870 18:52:44 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1257622 00:27:29.248 18:52:45 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:29.248 18:52:45 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:29.248 18:52:45 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:29.248 18:52:45 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:27:29.248 18:52:45 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:29.248 18:52:45 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.248 18:52:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:29.248 18:52:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.784 18:52:47 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:31.784 00:27:31.784 real 0m21.064s 00:27:31.784 user 0m29.405s 00:27:31.784 sys 0m4.272s 00:27:31.784 18:52:47 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:31.784 18:52:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:31.784 ************************************ 00:27:31.784 END TEST nvmf_identify_passthru 00:27:31.784 ************************************ 00:27:31.784 18:52:48 -- common/autotest_common.sh@1142 -- # return 0 00:27:31.784 18:52:48 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:31.784 18:52:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:31.784 18:52:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:31.784 18:52:48 -- common/autotest_common.sh@10 -- # set +x 00:27:31.784 ************************************ 00:27:31.784 START TEST nvmf_dif 00:27:31.784 ************************************ 00:27:31.784 18:52:48 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:31.784 * Looking for test storage... 
00:27:31.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:31.784 18:52:48 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:31.784 18:52:48 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:31.784 18:52:48 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:31.784 18:52:48 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:31.784 18:52:48 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:31.784 18:52:48 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:31.785 18:52:48 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:31.785 18:52:48 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:31.785 18:52:48 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:31.785 18:52:48 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.785 18:52:48 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.785 18:52:48 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.785 18:52:48 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:27:31.785 18:52:48 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:31.785 18:52:48 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:31.785 18:52:48 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:31.785 18:52:48 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:31.785 18:52:48 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:31.785 18:52:48 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.785 18:52:48 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:31.785 18:52:48 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:31.785 18:52:48 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:27:31.785 18:52:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:37.087 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 
(0x8086 - 0x159b)' 00:27:37.087 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:37.087 Found net devices under 0000:86:00.0: cvl_0_0 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:37.087 Found net devices under 0000:86:00.1: cvl_0_1 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:37.087 18:52:53 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:37.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:37.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:27:37.087 00:27:37.087 --- 10.0.0.2 ping statistics --- 00:27:37.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.087 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:37.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:37.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:27:37.087 00:27:37.087 --- 10.0.0.1 ping statistics --- 00:27:37.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.087 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:37.087 18:52:53 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:38.992 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:38.992 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:38.992 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:38.992 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:38.992 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:38.992 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:38.992 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:38.992 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:38.992 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:38.992 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:38.992 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:38.992 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:38.992 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:38.992 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:38.992 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:38.992 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:38.992 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:38.992 18:52:55 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:38.992 18:52:55 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:38.992 18:52:55 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:38.992 18:52:55 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:38.992 18:52:55 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:38.992 18:52:55 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:39.251 18:52:55 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:39.251 18:52:55 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:27:39.251 18:52:55 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:39.251 18:52:55 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:39.251 18:52:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:39.251 18:52:55 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1263080 00:27:39.251 18:52:55 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1263080 00:27:39.251 18:52:55 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:39.251 18:52:55 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1263080 ']' 00:27:39.251 18:52:55 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:39.251 18:52:55 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:39.251 18:52:55 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:39.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:39.251 18:52:55 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:39.251 18:52:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:39.251 [2024-07-15 18:52:55.762098] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:27:39.251 [2024-07-15 18:52:55.762138] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:39.251 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.251 [2024-07-15 18:52:55.817816] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.251 [2024-07-15 18:52:55.897777] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:39.251 [2024-07-15 18:52:55.897812] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:39.251 [2024-07-15 18:52:55.897819] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:39.251 [2024-07-15 18:52:55.897826] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:39.251 [2024-07-15 18:52:55.897831] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:39.251 [2024-07-15 18:52:55.897848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.187 18:52:56 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:40.187 18:52:56 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:27:40.187 18:52:56 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:40.187 18:52:56 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:40.187 18:52:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:40.187 18:52:56 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:40.187 18:52:56 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:40.187 18:52:56 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:40.187 18:52:56 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.187 18:52:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:40.187 [2024-07-15 18:52:56.600387] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:40.187 18:52:56 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.187 18:52:56 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:40.187 18:52:56 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:40.187 18:52:56 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:40.187 18:52:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:40.187 ************************************ 00:27:40.187 START TEST fio_dif_1_default 00:27:40.187 ************************************ 00:27:40.187 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:27:40.187 18:52:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:40.187 18:52:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:40.187 18:52:56 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:27:40.187 18:52:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:40.187 18:52:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:40.187 18:52:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:40.187 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.187 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:40.187 bdev_null0 00:27:40.187 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.187 18:52:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:40.187 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.187 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:40.187 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.187 18:52:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:40.187 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.187 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:40.188 [2024-07-15 18:52:56.668666] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.188 { 00:27:40.188 "params": { 00:27:40.188 "name": "Nvme$subsystem", 00:27:40.188 "trtype": "$TEST_TRANSPORT", 00:27:40.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.188 "adrfam": "ipv4", 00:27:40.188 "trsvcid": "$NVMF_PORT", 00:27:40.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.188 "hdgst": ${hdgst:-false}, 00:27:40.188 "ddgst": ${ddgst:-false} 00:27:40.188 }, 00:27:40.188 "method": "bdev_nvme_attach_controller" 00:27:40.188 } 00:27:40.188 EOF 00:27:40.188 )") 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 
00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:40.188 "params": { 00:27:40.188 "name": "Nvme0", 00:27:40.188 "trtype": "tcp", 00:27:40.188 "traddr": "10.0.0.2", 00:27:40.188 "adrfam": "ipv4", 00:27:40.188 "trsvcid": "4420", 00:27:40.188 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:40.188 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:40.188 "hdgst": false, 00:27:40.188 "ddgst": false 00:27:40.188 }, 00:27:40.188 "method": "bdev_nvme_attach_controller" 00:27:40.188 }' 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:40.188 18:52:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:40.447 filename0: (g=0): rw=randread, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:40.447 fio-3.35 00:27:40.447 Starting 1 thread 00:27:40.447 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.653 00:27:52.653 filename0: (groupid=0, jobs=1): err= 0: pid=1263499: Mon Jul 15 18:53:07 2024 00:27:52.653 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10012msec) 00:27:52.653 slat (nsec): min=5902, max=25717, avg=6253.89, stdev=1270.39 00:27:52.653 clat (usec): min=40779, max=45702, avg=41014.69, stdev=324.26 00:27:52.653 lat (usec): min=40786, max=45728, avg=41020.94, stdev=324.75 00:27:52.653 clat percentiles (usec): 00:27:52.653 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:27:52.653 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:27:52.653 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:27:52.653 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:27:52.653 | 99.99th=[45876] 00:27:52.653 bw ( KiB/s): min= 384, max= 416, per=99.50%, avg=388.80, stdev=11.72, samples=20 00:27:52.653 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:27:52.653 lat (msec) : 50=100.00% 00:27:52.653 cpu : usr=94.88%, sys=4.88%, ctx=10, majf=0, minf=216 00:27:52.653 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:52.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.653 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.653 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:52.653 00:27:52.653 Run status group 0 (all jobs): 00:27:52.653 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10012-10012msec 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@43 -- # local sub 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.653 00:27:52.653 real 0m11.059s 00:27:52.653 user 0m16.449s 00:27:52.653 sys 0m0.774s 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:52.653 ************************************ 00:27:52.653 END TEST fio_dif_1_default 00:27:52.653 ************************************ 00:27:52.653 18:53:07 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:27:52.653 18:53:07 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:52.653 18:53:07 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:52.653 18:53:07 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:52.653 18:53:07 nvmf_dif -- common/autotest_common.sh@10 
-- # set +x 00:27:52.653 ************************************ 00:27:52.653 START TEST fio_dif_1_multi_subsystems 00:27:52.653 ************************************ 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:52.653 bdev_null0 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:52.653 [2024-07-15 18:53:07.791165] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:52.653 bdev_null1 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:52.653 18:53:07 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:52.653 { 00:27:52.653 "params": { 00:27:52.653 "name": "Nvme$subsystem", 00:27:52.653 "trtype": "$TEST_TRANSPORT", 00:27:52.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.653 "adrfam": "ipv4", 00:27:52.653 "trsvcid": "$NVMF_PORT", 00:27:52.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.653 "hdgst": ${hdgst:-false}, 00:27:52.653 "ddgst": ${ddgst:-false} 00:27:52.653 }, 00:27:52.653 "method": "bdev_nvme_attach_controller" 00:27:52.653 } 00:27:52.653 EOF 00:27:52.653 )") 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local 
asan_lib= 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:52.653 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:52.654 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:27:52.654 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:52.654 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:52.654 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:27:52.654 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:27:52.654 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:52.654 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:52.654 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:52.654 { 00:27:52.654 "params": { 00:27:52.654 "name": "Nvme$subsystem", 00:27:52.654 "trtype": "$TEST_TRANSPORT", 00:27:52.654 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.654 "adrfam": "ipv4", 00:27:52.654 "trsvcid": "$NVMF_PORT", 00:27:52.654 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.654 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.654 "hdgst": ${hdgst:-false}, 00:27:52.654 "ddgst": ${ddgst:-false} 00:27:52.654 }, 00:27:52.654 "method": "bdev_nvme_attach_controller" 00:27:52.654 } 00:27:52.654 EOF 00:27:52.654 )") 00:27:52.654 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:27:52.654 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:52.654 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:52.654 
18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 00:27:52.654 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:27:52.654 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:52.654 "params": { 00:27:52.654 "name": "Nvme0", 00:27:52.654 "trtype": "tcp", 00:27:52.654 "traddr": "10.0.0.2", 00:27:52.654 "adrfam": "ipv4", 00:27:52.654 "trsvcid": "4420", 00:27:52.654 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:52.654 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:52.654 "hdgst": false, 00:27:52.654 "ddgst": false 00:27:52.654 }, 00:27:52.654 "method": "bdev_nvme_attach_controller" 00:27:52.654 },{ 00:27:52.654 "params": { 00:27:52.654 "name": "Nvme1", 00:27:52.654 "trtype": "tcp", 00:27:52.654 "traddr": "10.0.0.2", 00:27:52.654 "adrfam": "ipv4", 00:27:52.654 "trsvcid": "4420", 00:27:52.654 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:52.654 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:52.654 "hdgst": false, 00:27:52.654 "ddgst": false 00:27:52.654 }, 00:27:52.654 "method": "bdev_nvme_attach_controller" 00:27:52.654 }' 00:27:52.654 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:52.654 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:52.654 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:52.654 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:52.654 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:52.654 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:52.654 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 
00:27:52.654 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:52.654 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:52.654 18:53:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:52.654 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:52.654 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:52.654 fio-3.35 00:27:52.654 Starting 2 threads 00:27:52.654 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.636 00:28:02.636 filename0: (groupid=0, jobs=1): err= 0: pid=1265429: Mon Jul 15 18:53:18 2024 00:28:02.636 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10020msec) 00:28:02.636 slat (nsec): min=3060, max=28821, avg=7648.03, stdev=2588.40 00:28:02.636 clat (usec): min=40753, max=48114, avg=41040.05, stdev=488.91 00:28:02.636 lat (usec): min=40759, max=48124, avg=41047.70, stdev=488.82 00:28:02.636 clat percentiles (usec): 00:28:02.636 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:28:02.636 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:28:02.636 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:28:02.636 | 99.00th=[42206], 99.50th=[42206], 99.90th=[47973], 99.95th=[47973], 00:28:02.636 | 99.99th=[47973] 00:28:02.636 bw ( KiB/s): min= 383, max= 416, per=33.79%, avg=388.75, stdev=11.75, samples=20 00:28:02.636 iops : min= 95, max= 104, avg=97.15, stdev= 2.96, samples=20 00:28:02.636 lat (msec) : 50=100.00% 00:28:02.636 cpu : usr=97.67%, sys=2.07%, ctx=13, majf=0, minf=123 00:28:02.636 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:28:02.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.636 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.636 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:02.636 filename1: (groupid=0, jobs=1): err= 0: pid=1265430: Mon Jul 15 18:53:18 2024 00:28:02.636 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10006msec) 00:28:02.636 slat (nsec): min=4276, max=31679, avg=6967.20, stdev=1990.11 00:28:02.636 clat (usec): min=459, max=47373, avg=21044.15, stdev=20388.82 00:28:02.636 lat (usec): min=466, max=47387, avg=21051.12, stdev=20388.14 00:28:02.636 clat percentiles (usec): 00:28:02.636 | 1.00th=[ 578], 5.00th=[ 586], 10.00th=[ 586], 20.00th=[ 594], 00:28:02.636 | 30.00th=[ 603], 40.00th=[ 652], 50.00th=[41157], 60.00th=[41157], 00:28:02.636 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:28:02.636 | 99.00th=[42206], 99.50th=[42206], 99.90th=[47449], 99.95th=[47449], 00:28:02.636 | 99.99th=[47449] 00:28:02.636 bw ( KiB/s): min= 704, max= 768, per=66.28%, avg=761.26, stdev=20.18, samples=19 00:28:02.636 iops : min= 176, max= 192, avg=190.32, stdev= 5.04, samples=19 00:28:02.636 lat (usec) : 500=0.21%, 750=48.42%, 1000=1.26% 00:28:02.636 lat (msec) : 50=50.11% 00:28:02.636 cpu : usr=97.99%, sys=1.76%, ctx=15, majf=0, minf=139 00:28:02.636 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:02.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.636 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.636 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:02.636 00:28:02.637 Run status group 0 (all jobs): 00:28:02.637 READ: bw=1148KiB/s (1176kB/s), 390KiB/s-760KiB/s (399kB/s-778kB/s), io=11.2MiB (11.8MB), 
run=10006-10020msec 00:28:02.637 18:53:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:02.637 18:53:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:28:02.637 18:53:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:02.637 18:53:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:02.637 18:53:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:28:02.637 18:53:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:02.637 18:53:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.637 18:53:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:02.637 18:53:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.637 18:53:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:02.637 18:53:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.637 18:53:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:02.637 18:53:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.637 18:53:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:02.637 18:53:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:02.637 18:53:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:28:02.637 18:53:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:02.637 18:53:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.637 18:53:19 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:02.637 18:53:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.637 18:53:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:02.637 18:53:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.637 18:53:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:02.637 18:53:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.637 00:28:02.637 real 0m11.280s 00:28:02.637 user 0m26.602s 00:28:02.637 sys 0m0.659s 00:28:02.637 18:53:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:02.637 18:53:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:02.637 ************************************ 00:28:02.637 END TEST fio_dif_1_multi_subsystems 00:28:02.637 ************************************ 00:28:02.637 18:53:19 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:02.637 18:53:19 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:02.637 18:53:19 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:02.637 18:53:19 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:02.637 18:53:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:02.637 ************************************ 00:28:02.637 START TEST fio_dif_rand_params 00:28:02.637 ************************************ 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:02.637 
18:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.637 bdev_null0 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:02.637 18:53:19 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.637 [2024-07-15 18:53:19.130189] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:02.637 { 00:28:02.637 "params": { 00:28:02.637 "name": "Nvme$subsystem", 00:28:02.637 "trtype": "$TEST_TRANSPORT", 00:28:02.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.637 "adrfam": "ipv4", 00:28:02.637 "trsvcid": "$NVMF_PORT", 00:28:02.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.637 "hdgst": ${hdgst:-false}, 00:28:02.637 "ddgst": ${ddgst:-false} 00:28:02.637 }, 00:28:02.637 "method": "bdev_nvme_attach_controller" 00:28:02.637 } 00:28:02.637 EOF 00:28:02.637 )") 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:02.637 "params": { 00:28:02.637 "name": "Nvme0", 00:28:02.637 "trtype": "tcp", 00:28:02.637 "traddr": "10.0.0.2", 00:28:02.637 "adrfam": "ipv4", 00:28:02.637 "trsvcid": "4420", 00:28:02.637 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:02.637 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:02.637 "hdgst": false, 00:28:02.637 "ddgst": false 00:28:02.637 }, 00:28:02.637 "method": "bdev_nvme_attach_controller" 00:28:02.637 }' 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:02.637 18:53:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:02.896 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:02.896 ... 00:28:02.896 fio-3.35 00:28:02.896 Starting 3 threads 00:28:02.896 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.237 00:28:08.237 filename0: (groupid=0, jobs=1): err= 0: pid=1267391: Mon Jul 15 18:53:24 2024 00:28:08.237 read: IOPS=291, BW=36.4MiB/s (38.2MB/s)(182MiB/5003msec) 00:28:08.237 slat (nsec): min=6221, max=30401, avg=9808.03, stdev=2679.22 00:28:08.237 clat (usec): min=3256, max=52618, avg=10279.84, stdev=11347.42 00:28:08.237 lat (usec): min=3263, max=52629, avg=10289.65, stdev=11347.70 00:28:08.237 clat percentiles (usec): 00:28:08.237 | 1.00th=[ 3916], 5.00th=[ 4080], 10.00th=[ 4359], 20.00th=[ 5211], 00:28:08.237 | 30.00th=[ 6063], 40.00th=[ 6521], 50.00th=[ 6980], 60.00th=[ 7767], 00:28:08.237 | 70.00th=[ 8455], 80.00th=[ 9110], 90.00th=[10683], 95.00th=[47449], 00:28:08.237 | 99.00th=[49021], 99.50th=[50070], 99.90th=[51643], 99.95th=[52691], 00:28:08.237 | 99.99th=[52691] 00:28:08.237 bw ( KiB/s): min=29184, max=50944, per=36.34%, avg=37432.89, stdev=8076.97, samples=9 00:28:08.237 iops : min= 228, max= 398, avg=292.44, stdev=63.10, samples=9 00:28:08.237 lat (msec) : 4=2.19%, 10=85.32%, 20=4.25%, 50=7.75%, 100=0.48% 00:28:08.237 cpu : usr=93.96%, sys=5.74%, ctx=8, majf=0, minf=97 00:28:08.237 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:08.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.237 issued rwts: total=1458,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.237 latency : target=0, window=0, 
percentile=100.00%, depth=3 00:28:08.237 filename0: (groupid=0, jobs=1): err= 0: pid=1267392: Mon Jul 15 18:53:24 2024 00:28:08.237 read: IOPS=253, BW=31.6MiB/s (33.2MB/s)(159MiB/5011msec) 00:28:08.237 slat (nsec): min=6242, max=24961, avg=9783.27, stdev=2604.70 00:28:08.237 clat (usec): min=4004, max=91716, avg=11841.83, stdev=13168.23 00:28:08.237 lat (usec): min=4011, max=91724, avg=11851.61, stdev=13168.32 00:28:08.237 clat percentiles (usec): 00:28:08.237 | 1.00th=[ 4293], 5.00th=[ 4621], 10.00th=[ 5080], 20.00th=[ 5866], 00:28:08.237 | 30.00th=[ 6587], 40.00th=[ 6980], 50.00th=[ 7701], 60.00th=[ 8586], 00:28:08.237 | 70.00th=[ 9372], 80.00th=[10159], 90.00th=[45876], 95.00th=[48497], 00:28:08.237 | 99.00th=[51643], 99.50th=[52691], 99.90th=[90702], 99.95th=[91751], 00:28:08.237 | 99.99th=[91751] 00:28:08.237 bw ( KiB/s): min=24832, max=42240, per=31.44%, avg=32384.00, stdev=6639.84, samples=10 00:28:08.237 iops : min= 194, max= 330, avg=253.00, stdev=51.87, samples=10 00:28:08.237 lat (msec) : 10=78.63%, 20=11.36%, 50=7.65%, 100=2.37% 00:28:08.237 cpu : usr=94.89%, sys=4.79%, ctx=6, majf=0, minf=105 00:28:08.237 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:08.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.237 issued rwts: total=1268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.237 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:08.237 filename0: (groupid=0, jobs=1): err= 0: pid=1267393: Mon Jul 15 18:53:24 2024 00:28:08.237 read: IOPS=261, BW=32.7MiB/s (34.2MB/s)(163MiB/5003msec) 00:28:08.237 slat (nsec): min=6251, max=24908, avg=9801.12, stdev=2437.76 00:28:08.237 clat (usec): min=3552, max=90257, avg=11468.95, stdev=12261.96 00:28:08.237 lat (usec): min=3560, max=90270, avg=11478.75, stdev=12262.20 00:28:08.237 clat percentiles (usec): 00:28:08.237 | 1.00th=[ 4228], 5.00th=[ 4621], 
10.00th=[ 4883], 20.00th=[ 5735], 00:28:08.237 | 30.00th=[ 6587], 40.00th=[ 7111], 50.00th=[ 7570], 60.00th=[ 8455], 00:28:08.237 | 70.00th=[ 9372], 80.00th=[10421], 90.00th=[12256], 95.00th=[48497], 00:28:08.237 | 99.00th=[50594], 99.50th=[51119], 99.90th=[53216], 99.95th=[90702], 00:28:08.237 | 99.99th=[90702] 00:28:08.237 bw ( KiB/s): min=19494, max=42752, per=32.70%, avg=33682.44, stdev=7273.37, samples=9 00:28:08.237 iops : min= 152, max= 334, avg=263.11, stdev=56.90, samples=9 00:28:08.237 lat (msec) : 4=0.23%, 10=76.28%, 20=14.15%, 50=7.65%, 100=1.68% 00:28:08.237 cpu : usr=94.88%, sys=4.80%, ctx=10, majf=0, minf=63 00:28:08.237 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:08.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.237 issued rwts: total=1307,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.237 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:08.237 00:28:08.237 Run status group 0 (all jobs): 00:28:08.237 READ: bw=101MiB/s (105MB/s), 31.6MiB/s-36.4MiB/s (33.2MB/s-38.2MB/s), io=504MiB (529MB), run=5003-5011msec 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # 
set +x 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.497 bdev_null0 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.497 18:53:25 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.497 [2024-07-15 18:53:25.132686] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.497 bdev_null1 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.497 bdev_null2 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.497 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:08.784 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 
2 00:28:08.784 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:08.784 18:53:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:08.784 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:08.784 18:53:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:08.784 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:08.784 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:08.784 18:53:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:08.784 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:08.784 18:53:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:08.784 { 00:28:08.784 "params": { 00:28:08.785 "name": "Nvme$subsystem", 00:28:08.785 "trtype": "$TEST_TRANSPORT", 00:28:08.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.785 "adrfam": "ipv4", 00:28:08.785 "trsvcid": "$NVMF_PORT", 00:28:08.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.785 "hdgst": ${hdgst:-false}, 00:28:08.785 "ddgst": ${ddgst:-false} 00:28:08.785 }, 00:28:08.785 "method": "bdev_nvme_attach_controller" 00:28:08.785 } 00:28:08.785 EOF 00:28:08.785 )") 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:08.785 { 00:28:08.785 "params": { 00:28:08.785 "name": "Nvme$subsystem", 00:28:08.785 "trtype": "$TEST_TRANSPORT", 00:28:08.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.785 "adrfam": "ipv4", 00:28:08.785 "trsvcid": "$NVMF_PORT", 00:28:08.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.785 "hdgst": ${hdgst:-false}, 00:28:08.785 "ddgst": ${ddgst:-false} 00:28:08.785 }, 00:28:08.785 "method": "bdev_nvme_attach_controller" 
00:28:08.785 } 00:28:08.785 EOF 00:28:08.785 )") 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:08.785 { 00:28:08.785 "params": { 00:28:08.785 "name": "Nvme$subsystem", 00:28:08.785 "trtype": "$TEST_TRANSPORT", 00:28:08.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.785 "adrfam": "ipv4", 00:28:08.785 "trsvcid": "$NVMF_PORT", 00:28:08.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.785 "hdgst": ${hdgst:-false}, 00:28:08.785 "ddgst": ${ddgst:-false} 00:28:08.785 }, 00:28:08.785 "method": "bdev_nvme_attach_controller" 00:28:08.785 } 00:28:08.785 EOF 00:28:08.785 )") 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:08.785 "params": { 00:28:08.785 "name": "Nvme0", 00:28:08.785 "trtype": "tcp", 00:28:08.785 "traddr": "10.0.0.2", 00:28:08.785 "adrfam": "ipv4", 00:28:08.785 "trsvcid": "4420", 00:28:08.785 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:08.785 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:08.785 "hdgst": false, 00:28:08.785 "ddgst": false 00:28:08.785 }, 00:28:08.785 "method": "bdev_nvme_attach_controller" 00:28:08.785 },{ 00:28:08.785 "params": { 00:28:08.785 "name": "Nvme1", 00:28:08.785 "trtype": "tcp", 00:28:08.785 "traddr": "10.0.0.2", 00:28:08.785 "adrfam": "ipv4", 00:28:08.785 "trsvcid": "4420", 00:28:08.785 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:08.785 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:08.785 "hdgst": false, 00:28:08.785 "ddgst": false 00:28:08.785 }, 00:28:08.785 "method": "bdev_nvme_attach_controller" 00:28:08.785 },{ 00:28:08.785 "params": { 00:28:08.785 "name": "Nvme2", 00:28:08.785 "trtype": "tcp", 00:28:08.785 "traddr": "10.0.0.2", 00:28:08.785 "adrfam": "ipv4", 00:28:08.785 "trsvcid": "4420", 00:28:08.785 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:08.785 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:08.785 "hdgst": false, 00:28:08.785 "ddgst": false 00:28:08.785 }, 00:28:08.785 "method": "bdev_nvme_attach_controller" 00:28:08.785 }' 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:08.785 18:53:25 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:08.785 18:53:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:09.050 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:09.050 ... 00:28:09.050 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:09.050 ... 00:28:09.050 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:09.050 ... 
00:28:09.050 fio-3.35 00:28:09.050 Starting 24 threads 00:28:09.050 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.238 00:28:21.238 filename0: (groupid=0, jobs=1): err= 0: pid=1268603: Mon Jul 15 18:53:36 2024 00:28:21.238 read: IOPS=576, BW=2306KiB/s (2361kB/s)(22.6MiB/10020msec) 00:28:21.238 slat (nsec): min=7192, max=50675, avg=20664.69, stdev=6931.92 00:28:21.238 clat (usec): min=5921, max=32609, avg=27583.19, stdev=1616.99 00:28:21.238 lat (usec): min=5931, max=32626, avg=27603.85, stdev=1617.25 00:28:21.238 clat percentiles (usec): 00:28:21.238 | 1.00th=[24511], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:28:21.238 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:21.238 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:21.238 | 99.00th=[28967], 99.50th=[29230], 99.90th=[32637], 99.95th=[32637], 00:28:21.238 | 99.99th=[32637] 00:28:21.238 bw ( KiB/s): min= 2176, max= 2432, per=4.18%, avg=2304.00, stdev=41.53, samples=20 00:28:21.238 iops : min= 544, max= 608, avg=576.00, stdev=10.38, samples=20 00:28:21.238 lat (msec) : 10=0.40%, 20=0.43%, 50=99.17% 00:28:21.238 cpu : usr=98.48%, sys=1.13%, ctx=15, majf=0, minf=44 00:28:21.238 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:21.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.238 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.238 issued rwts: total=5776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.238 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.238 filename0: (groupid=0, jobs=1): err= 0: pid=1268604: Mon Jul 15 18:53:36 2024 00:28:21.238 read: IOPS=573, BW=2294KiB/s (2349kB/s)(22.4MiB/10015msec) 00:28:21.238 slat (nsec): min=7130, max=70885, avg=17280.84, stdev=4942.01 00:28:21.238 clat (usec): min=17474, max=54656, avg=27740.13, stdev=1085.83 00:28:21.238 lat (usec): min=17490, max=54674, avg=27757.41, 
stdev=1084.98 00:28:21.238 clat percentiles (usec): 00:28:21.238 | 1.00th=[26870], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:28:21.238 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:21.238 | 70.00th=[27657], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:21.238 | 99.00th=[28967], 99.50th=[29230], 99.90th=[43779], 99.95th=[43779], 00:28:21.238 | 99.99th=[54789] 00:28:21.238 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2290.05, stdev=39.34, samples=20 00:28:21.238 iops : min= 544, max= 576, avg=572.50, stdev= 9.84, samples=20 00:28:21.238 lat (msec) : 20=0.03%, 50=99.93%, 100=0.03% 00:28:21.238 cpu : usr=98.82%, sys=0.79%, ctx=13, majf=0, minf=60 00:28:21.238 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:21.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.238 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.238 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.238 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.238 filename0: (groupid=0, jobs=1): err= 0: pid=1268605: Mon Jul 15 18:53:36 2024 00:28:21.238 read: IOPS=573, BW=2292KiB/s (2347kB/s)(22.4MiB/10002msec) 00:28:21.238 slat (nsec): min=6016, max=93641, avg=26047.44, stdev=15999.28 00:28:21.238 clat (usec): min=7954, max=51699, avg=27697.96, stdev=2601.07 00:28:21.238 lat (usec): min=7968, max=51716, avg=27724.01, stdev=2600.34 00:28:21.238 clat percentiles (usec): 00:28:21.238 | 1.00th=[16581], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:28:21.238 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:21.238 | 70.00th=[27657], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:21.238 | 99.00th=[38011], 99.50th=[43779], 99.90th=[51643], 99.95th=[51643], 00:28:21.238 | 99.99th=[51643] 00:28:21.238 bw ( KiB/s): min= 2176, max= 2432, per=4.14%, avg=2285.68, stdev=60.24, samples=19 
00:28:21.238 iops : min= 544, max= 608, avg=571.42, stdev=15.06, samples=19 00:28:21.238 lat (msec) : 10=0.28%, 20=0.84%, 50=98.60%, 100=0.28% 00:28:21.238 cpu : usr=98.92%, sys=0.68%, ctx=12, majf=0, minf=74 00:28:21.238 IO depths : 1=4.4%, 2=9.8%, 4=22.3%, 8=54.9%, 16=8.6%, 32=0.0%, >=64=0.0% 00:28:21.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.238 complete : 0=0.0%, 4=93.6%, 8=1.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.238 issued rwts: total=5732,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.238 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.238 filename0: (groupid=0, jobs=1): err= 0: pid=1268606: Mon Jul 15 18:53:36 2024 00:28:21.238 read: IOPS=582, BW=2331KiB/s (2387kB/s)(22.8MiB/10012msec) 00:28:21.238 slat (nsec): min=6873, max=96204, avg=32816.40, stdev=21870.26 00:28:21.238 clat (usec): min=10013, max=53340, avg=27147.71, stdev=3368.76 00:28:21.238 lat (usec): min=10021, max=53365, avg=27180.53, stdev=3372.53 00:28:21.238 clat percentiles (usec): 00:28:21.238 | 1.00th=[16319], 5.00th=[20841], 10.00th=[25560], 20.00th=[27132], 00:28:21.238 | 30.00th=[27132], 40.00th=[27395], 50.00th=[27395], 60.00th=[27657], 00:28:21.238 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28443], 00:28:21.238 | 99.00th=[39060], 99.50th=[42730], 99.90th=[53216], 99.95th=[53216], 00:28:21.238 | 99.99th=[53216] 00:28:21.238 bw ( KiB/s): min= 2096, max= 2736, per=4.20%, avg=2318.32, stdev=119.36, samples=19 00:28:21.238 iops : min= 524, max= 684, avg=579.58, stdev=29.84, samples=19 00:28:21.238 lat (msec) : 20=4.10%, 50=95.63%, 100=0.27% 00:28:21.238 cpu : usr=98.57%, sys=1.04%, ctx=15, majf=0, minf=50 00:28:21.238 IO depths : 1=4.0%, 2=8.6%, 4=20.0%, 8=58.4%, 16=9.1%, 32=0.0%, >=64=0.0% 00:28:21.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.238 complete : 0=0.0%, 4=93.0%, 8=1.8%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.238 issued rwts: total=5834,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:28:21.238 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.238 filename0: (groupid=0, jobs=1): err= 0: pid=1268607: Mon Jul 15 18:53:36 2024 00:28:21.238 read: IOPS=573, BW=2296KiB/s (2351kB/s)(22.4MiB/10009msec) 00:28:21.238 slat (nsec): min=7578, max=96392, avg=41320.58, stdev=20001.09 00:28:21.238 clat (usec): min=18243, max=38677, avg=27481.04, stdev=857.13 00:28:21.238 lat (usec): min=18258, max=38691, avg=27522.36, stdev=858.59 00:28:21.239 clat percentiles (usec): 00:28:21.239 | 1.00th=[26346], 5.00th=[26870], 10.00th=[27132], 20.00th=[27132], 00:28:21.239 | 30.00th=[27132], 40.00th=[27395], 50.00th=[27395], 60.00th=[27657], 00:28:21.239 | 70.00th=[27657], 80.00th=[27657], 90.00th=[27919], 95.00th=[28181], 00:28:21.239 | 99.00th=[28443], 99.50th=[28705], 99.90th=[38536], 99.95th=[38536], 00:28:21.239 | 99.99th=[38536] 00:28:21.239 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2290.53, stdev=40.36, samples=19 00:28:21.239 iops : min= 544, max= 576, avg=572.63, stdev=10.09, samples=19 00:28:21.239 lat (msec) : 20=0.28%, 50=99.72% 00:28:21.239 cpu : usr=98.74%, sys=0.86%, ctx=13, majf=0, minf=51 00:28:21.239 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:21.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.239 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.239 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.239 filename0: (groupid=0, jobs=1): err= 0: pid=1268608: Mon Jul 15 18:53:36 2024 00:28:21.239 read: IOPS=581, BW=2328KiB/s (2383kB/s)(22.8MiB/10009msec) 00:28:21.239 slat (usec): min=3, max=1968, avg=33.84, stdev=28.93 00:28:21.239 clat (usec): min=2401, max=41458, avg=27212.15, stdev=3083.87 00:28:21.239 lat (usec): min=2408, max=41475, avg=27245.99, stdev=3086.80 00:28:21.239 clat percentiles (usec): 
00:28:21.239 | 1.00th=[ 3916], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:28:21.239 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:28:21.239 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:28:21.239 | 99.00th=[28705], 99.50th=[28967], 99.90th=[37487], 99.95th=[37487], 00:28:21.239 | 99.99th=[41681] 00:28:21.239 bw ( KiB/s): min= 2176, max= 2944, per=4.21%, avg=2322.80, stdev=155.80, samples=20 00:28:21.239 iops : min= 544, max= 736, avg=580.80, stdev=38.89, samples=20 00:28:21.239 lat (msec) : 4=1.06%, 10=0.58%, 20=0.27%, 50=98.08% 00:28:21.239 cpu : usr=97.06%, sys=1.70%, ctx=207, majf=0, minf=94 00:28:21.239 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:21.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.239 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.239 issued rwts: total=5824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.239 filename0: (groupid=0, jobs=1): err= 0: pid=1268609: Mon Jul 15 18:53:36 2024 00:28:21.239 read: IOPS=575, BW=2303KiB/s (2359kB/s)(22.5MiB/10003msec) 00:28:21.239 slat (nsec): min=7263, max=47483, avg=14170.76, stdev=6561.83 00:28:21.239 clat (usec): min=7544, max=32349, avg=27669.65, stdev=1408.69 00:28:21.239 lat (usec): min=7558, max=32362, avg=27683.82, stdev=1408.64 00:28:21.239 clat percentiles (usec): 00:28:21.239 | 1.00th=[22676], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:28:21.239 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:21.239 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:21.239 | 99.00th=[29230], 99.50th=[29230], 99.90th=[32375], 99.95th=[32375], 00:28:21.239 | 99.99th=[32375] 00:28:21.239 bw ( KiB/s): min= 2176, max= 2436, per=4.18%, avg=2304.21, stdev=60.82, samples=19 00:28:21.239 iops : min= 544, max= 609, 
avg=576.05, stdev=15.20, samples=19 00:28:21.239 lat (msec) : 10=0.28%, 20=0.56%, 50=99.17% 00:28:21.239 cpu : usr=98.61%, sys=1.00%, ctx=19, majf=0, minf=71 00:28:21.239 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:21.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.239 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.239 issued rwts: total=5760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.239 filename0: (groupid=0, jobs=1): err= 0: pid=1268610: Mon Jul 15 18:53:36 2024 00:28:21.239 read: IOPS=572, BW=2291KiB/s (2346kB/s)(22.4MiB/10001msec) 00:28:21.239 slat (nsec): min=6718, max=93461, avg=38380.43, stdev=19727.45 00:28:21.239 clat (usec): min=13831, max=62801, avg=27553.85, stdev=2032.67 00:28:21.239 lat (usec): min=13839, max=62819, avg=27592.23, stdev=2032.53 00:28:21.239 clat percentiles (usec): 00:28:21.239 | 1.00th=[26608], 5.00th=[26870], 10.00th=[27132], 20.00th=[27132], 00:28:21.239 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27395], 60.00th=[27657], 00:28:21.239 | 70.00th=[27657], 80.00th=[27657], 90.00th=[27919], 95.00th=[28181], 00:28:21.239 | 99.00th=[28443], 99.50th=[28705], 99.90th=[62653], 99.95th=[62653], 00:28:21.239 | 99.99th=[62653] 00:28:21.239 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2283.79, stdev=64.19, samples=19 00:28:21.239 iops : min= 512, max= 576, avg=570.95, stdev=16.05, samples=19 00:28:21.239 lat (msec) : 20=0.28%, 50=99.44%, 100=0.28% 00:28:21.239 cpu : usr=98.88%, sys=0.72%, ctx=7, majf=0, minf=55 00:28:21.239 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:21.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.239 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.239 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.239 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:28:21.239 filename1: (groupid=0, jobs=1): err= 0: pid=1268611: Mon Jul 15 18:53:36 2024 00:28:21.239 read: IOPS=573, BW=2294KiB/s (2349kB/s)(22.4MiB/10015msec) 00:28:21.239 slat (nsec): min=7157, max=42641, avg=15616.41, stdev=4890.98 00:28:21.239 clat (usec): min=17788, max=50217, avg=27761.38, stdev=1022.66 00:28:21.239 lat (usec): min=17798, max=50251, avg=27777.00, stdev=1022.86 00:28:21.239 clat percentiles (usec): 00:28:21.239 | 1.00th=[26608], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:28:21.239 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:21.239 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:21.239 | 99.00th=[28967], 99.50th=[29230], 99.90th=[43254], 99.95th=[43254], 00:28:21.239 | 99.99th=[50070] 00:28:21.239 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2290.05, stdev=39.34, samples=20 00:28:21.239 iops : min= 544, max= 576, avg=572.50, stdev= 9.84, samples=20 00:28:21.239 lat (msec) : 20=0.03%, 50=99.95%, 100=0.02% 00:28:21.239 cpu : usr=98.79%, sys=0.82%, ctx=7, majf=0, minf=93 00:28:21.239 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:21.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.239 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.239 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.239 filename1: (groupid=0, jobs=1): err= 0: pid=1268613: Mon Jul 15 18:53:36 2024 00:28:21.239 read: IOPS=572, BW=2289KiB/s (2344kB/s)(22.4MiB/10008msec) 00:28:21.239 slat (usec): min=7, max=102, avg=41.31, stdev=20.12 00:28:21.239 clat (usec): min=24276, max=57090, avg=27617.30, stdev=1604.12 00:28:21.239 lat (usec): min=24301, max=57115, avg=27658.61, stdev=1602.43 00:28:21.239 clat percentiles (usec): 00:28:21.239 | 1.00th=[26608], 5.00th=[26870], 
10.00th=[27132], 20.00th=[27132], 00:28:21.239 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:28:21.239 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:28:21.239 | 99.00th=[28705], 99.50th=[28705], 99.90th=[56886], 99.95th=[56886], 00:28:21.239 | 99.99th=[56886] 00:28:21.239 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2283.79, stdev=64.19, samples=19 00:28:21.240 iops : min= 512, max= 576, avg=570.95, stdev=16.05, samples=19 00:28:21.240 lat (msec) : 50=99.72%, 100=0.28% 00:28:21.240 cpu : usr=98.71%, sys=0.91%, ctx=17, majf=0, minf=81 00:28:21.240 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:21.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.240 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.240 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.240 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.240 filename1: (groupid=0, jobs=1): err= 0: pid=1268614: Mon Jul 15 18:53:36 2024 00:28:21.240 read: IOPS=573, BW=2296KiB/s (2351kB/s)(22.4MiB/10009msec) 00:28:21.240 slat (nsec): min=4196, max=94079, avg=41543.06, stdev=19956.86 00:28:21.240 clat (usec): min=18220, max=38733, avg=27480.55, stdev=861.41 00:28:21.240 lat (usec): min=18236, max=38746, avg=27522.09, stdev=862.63 00:28:21.240 clat percentiles (usec): 00:28:21.240 | 1.00th=[26346], 5.00th=[26870], 10.00th=[27132], 20.00th=[27132], 00:28:21.240 | 30.00th=[27132], 40.00th=[27395], 50.00th=[27395], 60.00th=[27657], 00:28:21.240 | 70.00th=[27657], 80.00th=[27657], 90.00th=[27919], 95.00th=[28181], 00:28:21.240 | 99.00th=[28443], 99.50th=[28705], 99.90th=[38536], 99.95th=[38536], 00:28:21.240 | 99.99th=[38536] 00:28:21.240 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2290.53, stdev=40.36, samples=19 00:28:21.240 iops : min= 544, max= 576, avg=572.63, stdev=10.09, samples=19 00:28:21.240 lat (msec) : 20=0.28%, 
50=99.72% 00:28:21.240 cpu : usr=98.77%, sys=0.85%, ctx=4, majf=0, minf=70 00:28:21.240 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:21.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.240 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.240 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.240 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.240 filename1: (groupid=0, jobs=1): err= 0: pid=1268615: Mon Jul 15 18:53:36 2024 00:28:21.240 read: IOPS=572, BW=2289KiB/s (2344kB/s)(22.4MiB/10009msec) 00:28:21.240 slat (nsec): min=6908, max=87261, avg=26045.30, stdev=14754.86 00:28:21.240 clat (usec): min=17561, max=62299, avg=27728.57, stdev=1872.70 00:28:21.240 lat (usec): min=17569, max=62318, avg=27754.61, stdev=1872.72 00:28:21.240 clat percentiles (usec): 00:28:21.240 | 1.00th=[21890], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:28:21.240 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:21.240 | 70.00th=[27657], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:21.240 | 99.00th=[33817], 99.50th=[37487], 99.90th=[50594], 99.95th=[50594], 00:28:21.240 | 99.99th=[62129] 00:28:21.240 bw ( KiB/s): min= 2176, max= 2320, per=4.14%, avg=2283.79, stdev=48.25, samples=19 00:28:21.240 iops : min= 544, max= 580, avg=570.95, stdev=12.06, samples=19 00:28:21.240 lat (msec) : 20=0.24%, 50=99.48%, 100=0.28% 00:28:21.240 cpu : usr=98.62%, sys=0.99%, ctx=10, majf=0, minf=63 00:28:21.240 IO depths : 1=5.5%, 2=11.5%, 4=24.4%, 8=51.6%, 16=7.1%, 32=0.0%, >=64=0.0% 00:28:21.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.240 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.240 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.240 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.240 filename1: 
(groupid=0, jobs=1): err= 0: pid=1268616: Mon Jul 15 18:53:36 2024 00:28:21.240 read: IOPS=576, BW=2306KiB/s (2361kB/s)(22.6MiB/10019msec) 00:28:21.240 slat (nsec): min=7423, max=43175, avg=19593.41, stdev=6346.59 00:28:21.240 clat (usec): min=6087, max=36582, avg=27593.39, stdev=1690.39 00:28:21.240 lat (usec): min=6102, max=36599, avg=27612.98, stdev=1690.70 00:28:21.240 clat percentiles (usec): 00:28:21.240 | 1.00th=[21890], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:28:21.240 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:21.240 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:21.240 | 99.00th=[29230], 99.50th=[31851], 99.90th=[33817], 99.95th=[33817], 00:28:21.240 | 99.99th=[36439] 00:28:21.240 bw ( KiB/s): min= 2176, max= 2432, per=4.18%, avg=2304.00, stdev=41.53, samples=20 00:28:21.240 iops : min= 544, max= 608, avg=576.00, stdev=10.38, samples=20 00:28:21.240 lat (msec) : 10=0.50%, 20=0.33%, 50=99.17% 00:28:21.240 cpu : usr=98.84%, sys=0.79%, ctx=10, majf=0, minf=54 00:28:21.240 IO depths : 1=6.0%, 2=12.2%, 4=24.8%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:28:21.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.240 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.240 issued rwts: total=5776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.240 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.240 filename1: (groupid=0, jobs=1): err= 0: pid=1268617: Mon Jul 15 18:53:36 2024 00:28:21.240 read: IOPS=578, BW=2316KiB/s (2371kB/s)(22.6MiB/10004msec) 00:28:21.240 slat (nsec): min=6893, max=40832, avg=18769.08, stdev=6059.93 00:28:21.240 clat (usec): min=9749, max=54189, avg=27474.27, stdev=2113.18 00:28:21.240 lat (usec): min=9757, max=54211, avg=27493.04, stdev=2114.53 00:28:21.240 clat percentiles (usec): 00:28:21.240 | 1.00th=[17433], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:28:21.240 | 30.00th=[27657], 
40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:21.240 | 70.00th=[27657], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:21.240 | 99.00th=[29230], 99.50th=[38536], 99.90th=[44303], 99.95th=[53740], 00:28:21.240 | 99.99th=[54264] 00:28:21.240 bw ( KiB/s): min= 2176, max= 2704, per=4.20%, avg=2317.47, stdev=98.09, samples=19 00:28:21.240 iops : min= 544, max= 676, avg=579.37, stdev=24.52, samples=19 00:28:21.240 lat (msec) : 10=0.07%, 20=2.05%, 50=97.82%, 100=0.05% 00:28:21.240 cpu : usr=98.78%, sys=0.83%, ctx=15, majf=0, minf=62 00:28:21.240 IO depths : 1=5.6%, 2=11.5%, 4=23.8%, 8=52.2%, 16=6.9%, 32=0.0%, >=64=0.0% 00:28:21.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.240 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.240 issued rwts: total=5792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.240 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.240 filename1: (groupid=0, jobs=1): err= 0: pid=1268618: Mon Jul 15 18:53:36 2024 00:28:21.240 read: IOPS=575, BW=2302KiB/s (2358kB/s)(22.5MiB/10014msec) 00:28:21.240 slat (nsec): min=6877, max=90874, avg=34523.72, stdev=21233.77 00:28:21.240 clat (usec): min=10403, max=53516, avg=27456.66, stdev=2883.60 00:28:21.240 lat (usec): min=10415, max=53542, avg=27491.19, stdev=2884.81 00:28:21.240 clat percentiles (usec): 00:28:21.240 | 1.00th=[16581], 5.00th=[24773], 10.00th=[26870], 20.00th=[27132], 00:28:21.240 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:28:21.240 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28443], 00:28:21.240 | 99.00th=[39584], 99.50th=[42730], 99.90th=[53216], 99.95th=[53740], 00:28:21.240 | 99.99th=[53740] 00:28:21.240 bw ( KiB/s): min= 2176, max= 2480, per=4.16%, avg=2298.95, stdev=73.72, samples=19 00:28:21.240 iops : min= 544, max= 620, avg=574.74, stdev=18.43, samples=19 00:28:21.240 lat (msec) : 20=2.78%, 50=96.95%, 100=0.28% 00:28:21.240 cpu : 
usr=98.68%, sys=0.93%, ctx=14, majf=0, minf=59 00:28:21.240 IO depths : 1=4.9%, 2=10.4%, 4=22.8%, 8=54.0%, 16=7.9%, 32=0.0%, >=64=0.0% 00:28:21.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.241 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.241 issued rwts: total=5764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.241 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.241 filename1: (groupid=0, jobs=1): err= 0: pid=1268619: Mon Jul 15 18:53:36 2024 00:28:21.241 read: IOPS=572, BW=2291KiB/s (2346kB/s)(22.4MiB/10009msec) 00:28:21.241 slat (nsec): min=6531, max=77080, avg=21064.71, stdev=6576.74 00:28:21.241 clat (usec): min=8351, max=64568, avg=27730.76, stdev=1951.37 00:28:21.241 lat (usec): min=8378, max=64586, avg=27751.83, stdev=1950.72 00:28:21.241 clat percentiles (usec): 00:28:21.241 | 1.00th=[24773], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:28:21.241 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:21.241 | 70.00th=[27657], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:21.241 | 99.00th=[29230], 99.50th=[34341], 99.90th=[57934], 99.95th=[64226], 00:28:21.241 | 99.99th=[64750] 00:28:21.241 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2283.79, stdev=64.19, samples=19 00:28:21.241 iops : min= 512, max= 576, avg=570.95, stdev=16.05, samples=19 00:28:21.241 lat (msec) : 10=0.07%, 20=0.24%, 50=99.41%, 100=0.28% 00:28:21.241 cpu : usr=98.82%, sys=0.80%, ctx=5, majf=0, minf=58 00:28:21.241 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:28:21.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.241 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.241 issued rwts: total=5732,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.241 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.241 filename2: (groupid=0, jobs=1): err= 0: 
pid=1268620: Mon Jul 15 18:53:36 2024 00:28:21.241 read: IOPS=571, BW=2284KiB/s (2339kB/s)(22.3MiB/10007msec) 00:28:21.241 slat (nsec): min=6821, max=88405, avg=18269.25, stdev=12372.45 00:28:21.241 clat (usec): min=6260, max=62364, avg=27902.95, stdev=3543.83 00:28:21.241 lat (usec): min=6273, max=62383, avg=27921.22, stdev=3544.20 00:28:21.241 clat percentiles (usec): 00:28:21.241 | 1.00th=[19006], 5.00th=[22414], 10.00th=[25035], 20.00th=[27395], 00:28:21.241 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:21.241 | 70.00th=[27919], 80.00th=[28181], 90.00th=[30540], 95.00th=[33817], 00:28:21.241 | 99.00th=[40109], 99.50th=[41157], 99.90th=[55313], 99.95th=[62129], 00:28:21.241 | 99.99th=[62129] 00:28:21.241 bw ( KiB/s): min= 1824, max= 2416, per=4.12%, avg=2273.68, stdev=128.65, samples=19 00:28:21.241 iops : min= 456, max= 604, avg=568.42, stdev=32.16, samples=19 00:28:21.241 lat (msec) : 10=0.11%, 20=1.09%, 50=98.53%, 100=0.28% 00:28:21.241 cpu : usr=98.75%, sys=0.87%, ctx=9, majf=0, minf=75 00:28:21.241 IO depths : 1=2.4%, 2=4.8%, 4=11.7%, 8=68.6%, 16=12.5%, 32=0.0%, >=64=0.0% 00:28:21.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.241 complete : 0=0.0%, 4=91.2%, 8=5.5%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.241 issued rwts: total=5714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.241 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.241 filename2: (groupid=0, jobs=1): err= 0: pid=1268621: Mon Jul 15 18:53:36 2024 00:28:21.241 read: IOPS=573, BW=2296KiB/s (2351kB/s)(22.4MiB/10008msec) 00:28:21.241 slat (nsec): min=8080, max=94046, avg=41570.45, stdev=19710.41 00:28:21.241 clat (usec): min=18252, max=38616, avg=27495.96, stdev=856.44 00:28:21.241 lat (usec): min=18267, max=38639, avg=27537.53, stdev=857.26 00:28:21.241 clat percentiles (usec): 00:28:21.241 | 1.00th=[26346], 5.00th=[26870], 10.00th=[27132], 20.00th=[27132], 00:28:21.241 | 30.00th=[27395], 40.00th=[27395], 
50.00th=[27395], 60.00th=[27657], 00:28:21.241 | 70.00th=[27657], 80.00th=[27657], 90.00th=[27919], 95.00th=[28181], 00:28:21.241 | 99.00th=[28443], 99.50th=[28705], 99.90th=[38536], 99.95th=[38536], 00:28:21.241 | 99.99th=[38536] 00:28:21.241 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2290.53, stdev=40.36, samples=19 00:28:21.241 iops : min= 544, max= 576, avg=572.63, stdev=10.09, samples=19 00:28:21.241 lat (msec) : 20=0.28%, 50=99.72% 00:28:21.241 cpu : usr=98.82%, sys=0.81%, ctx=12, majf=0, minf=74 00:28:21.241 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:21.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.241 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.241 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.241 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.241 filename2: (groupid=0, jobs=1): err= 0: pid=1268623: Mon Jul 15 18:53:36 2024 00:28:21.241 read: IOPS=573, BW=2294KiB/s (2349kB/s)(22.4MiB/10015msec) 00:28:21.241 slat (nsec): min=7013, max=43389, avg=16780.54, stdev=4239.63 00:28:21.241 clat (usec): min=21774, max=43555, avg=27747.00, stdev=965.20 00:28:21.241 lat (usec): min=21781, max=43593, avg=27763.78, stdev=965.51 00:28:21.241 clat percentiles (usec): 00:28:21.241 | 1.00th=[26608], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:28:21.241 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:21.241 | 70.00th=[27657], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:21.241 | 99.00th=[28967], 99.50th=[29230], 99.90th=[43254], 99.95th=[43254], 00:28:21.241 | 99.99th=[43779] 00:28:21.241 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2290.05, stdev=39.34, samples=20 00:28:21.241 iops : min= 544, max= 576, avg=572.50, stdev= 9.84, samples=20 00:28:21.241 lat (msec) : 50=100.00% 00:28:21.241 cpu : usr=98.84%, sys=0.78%, ctx=13, majf=0, minf=71 00:28:21.241 IO 
depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:21.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.241 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.241 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.241 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.241 filename2: (groupid=0, jobs=1): err= 0: pid=1268624: Mon Jul 15 18:53:36 2024 00:28:21.241 read: IOPS=584, BW=2336KiB/s (2392kB/s)(22.8MiB/10006msec) 00:28:21.241 slat (nsec): min=6138, max=85846, avg=22674.81, stdev=17848.26 00:28:21.241 clat (usec): min=6975, max=62255, avg=27216.54, stdev=3579.71 00:28:21.241 lat (usec): min=6982, max=62274, avg=27239.22, stdev=3580.21 00:28:21.241 clat percentiles (usec): 00:28:21.241 | 1.00th=[16909], 5.00th=[21103], 10.00th=[22938], 20.00th=[27132], 00:28:21.241 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:28:21.241 | 70.00th=[27657], 80.00th=[27919], 90.00th=[28443], 95.00th=[32637], 00:28:21.241 | 99.00th=[35390], 99.50th=[38011], 99.90th=[62129], 99.95th=[62129], 00:28:21.241 | 99.99th=[62129] 00:28:21.241 bw ( KiB/s): min= 2052, max= 2544, per=4.20%, avg=2317.68, stdev=86.01, samples=19 00:28:21.241 iops : min= 513, max= 636, avg=579.42, stdev=21.50, samples=19 00:28:21.241 lat (msec) : 10=0.17%, 20=2.64%, 50=96.92%, 100=0.27% 00:28:21.241 cpu : usr=98.74%, sys=0.87%, ctx=10, majf=0, minf=89 00:28:21.241 IO depths : 1=2.6%, 2=5.3%, 4=12.0%, 8=68.0%, 16=12.0%, 32=0.0%, >=64=0.0% 00:28:21.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.241 complete : 0=0.0%, 4=91.0%, 8=5.5%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.241 issued rwts: total=5844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.241 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.241 filename2: (groupid=0, jobs=1): err= 0: pid=1268625: Mon Jul 15 18:53:36 2024 00:28:21.241 read: 
IOPS=574, BW=2297KiB/s (2352kB/s)(22.4MiB/10003msec) 00:28:21.241 slat (nsec): min=7194, max=77702, avg=21996.61, stdev=7520.18 00:28:21.241 clat (usec): min=8167, max=51994, avg=27663.60, stdev=1811.72 00:28:21.242 lat (usec): min=8194, max=52018, avg=27685.59, stdev=1811.87 00:28:21.242 clat percentiles (usec): 00:28:21.242 | 1.00th=[24773], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:28:21.242 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:21.242 | 70.00th=[27657], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:21.242 | 99.00th=[28967], 99.50th=[29230], 99.90th=[52167], 99.95th=[52167], 00:28:21.242 | 99.99th=[52167] 00:28:21.242 bw ( KiB/s): min= 2176, max= 2432, per=4.15%, avg=2290.53, stdev=58.73, samples=19 00:28:21.242 iops : min= 544, max= 608, avg=572.63, stdev=14.68, samples=19 00:28:21.242 lat (msec) : 10=0.28%, 20=0.28%, 50=99.16%, 100=0.28% 00:28:21.242 cpu : usr=98.77%, sys=0.83%, ctx=12, majf=0, minf=58 00:28:21.242 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.4%, 16=6.5%, 32=0.0%, >=64=0.0% 00:28:21.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.242 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.242 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.242 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.242 filename2: (groupid=0, jobs=1): err= 0: pid=1268626: Mon Jul 15 18:53:36 2024 00:28:21.242 read: IOPS=580, BW=2323KiB/s (2379kB/s)(22.7MiB/10008msec) 00:28:21.242 slat (usec): min=6, max=104, avg=25.32, stdev=20.81 00:28:21.242 clat (usec): min=2331, max=41668, avg=27370.82, stdev=2888.29 00:28:21.242 lat (usec): min=2349, max=41684, avg=27396.14, stdev=2889.11 00:28:21.242 clat percentiles (usec): 00:28:21.242 | 1.00th=[ 5735], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:28:21.242 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:21.242 | 70.00th=[27919], 
80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:21.242 | 99.00th=[28705], 99.50th=[28967], 99.90th=[38011], 99.95th=[41681], 00:28:21.242 | 99.99th=[41681] 00:28:21.242 bw ( KiB/s): min= 2176, max= 2853, per=4.20%, avg=2318.65, stdev=138.15, samples=20 00:28:21.242 iops : min= 544, max= 713, avg=579.65, stdev=34.49, samples=20 00:28:21.242 lat (msec) : 4=0.62%, 10=0.83%, 20=0.28%, 50=98.28% 00:28:21.242 cpu : usr=98.59%, sys=1.02%, ctx=20, majf=0, minf=91 00:28:21.242 IO depths : 1=6.1%, 2=12.3%, 4=24.6%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:28:21.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.242 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.242 issued rwts: total=5812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.242 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.242 filename2: (groupid=0, jobs=1): err= 0: pid=1268627: Mon Jul 15 18:53:36 2024 00:28:21.242 read: IOPS=574, BW=2296KiB/s (2351kB/s)(22.4MiB/10002msec) 00:28:21.242 slat (nsec): min=7018, max=95149, avg=39954.21, stdev=20902.45 00:28:21.242 clat (usec): min=6275, max=64761, avg=27467.82, stdev=2013.62 00:28:21.242 lat (usec): min=6284, max=64798, avg=27507.78, stdev=2015.58 00:28:21.242 clat percentiles (usec): 00:28:21.242 | 1.00th=[26084], 5.00th=[26870], 10.00th=[27132], 20.00th=[27132], 00:28:21.242 | 30.00th=[27132], 40.00th=[27395], 50.00th=[27395], 60.00th=[27657], 00:28:21.242 | 70.00th=[27657], 80.00th=[27657], 90.00th=[27919], 95.00th=[28181], 00:28:21.242 | 99.00th=[28705], 99.50th=[34866], 99.90th=[51119], 99.95th=[64226], 00:28:21.242 | 99.99th=[64750] 00:28:21.242 bw ( KiB/s): min= 2052, max= 2304, per=4.14%, avg=2284.00, stdev=63.37, samples=19 00:28:21.242 iops : min= 513, max= 576, avg=571.00, stdev=15.84, samples=19 00:28:21.242 lat (msec) : 10=0.28%, 20=0.33%, 50=99.11%, 100=0.28% 00:28:21.242 cpu : usr=98.70%, sys=0.91%, ctx=16, majf=0, minf=64 00:28:21.242 IO depths : 1=5.9%, 
2=12.1%, 4=24.9%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:28:21.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.242 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.242 issued rwts: total=5742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.242 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.242 filename2: (groupid=0, jobs=1): err= 0: pid=1268628: Mon Jul 15 18:53:36 2024 00:28:21.242 read: IOPS=572, BW=2292KiB/s (2347kB/s)(22.4MiB/10002msec) 00:28:21.242 slat (usec): min=6, max=113, avg=29.50, stdev=18.71 00:28:21.242 clat (usec): min=8017, max=58964, avg=27671.70, stdev=2453.53 00:28:21.242 lat (usec): min=8031, max=58978, avg=27701.20, stdev=2453.08 00:28:21.242 clat percentiles (usec): 00:28:21.242 | 1.00th=[20055], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:28:21.242 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:21.242 | 70.00th=[27657], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:21.242 | 99.00th=[35914], 99.50th=[40633], 99.90th=[58983], 99.95th=[58983], 00:28:21.242 | 99.99th=[58983] 00:28:21.242 bw ( KiB/s): min= 2052, max= 2432, per=4.14%, avg=2284.84, stdev=73.30, samples=19 00:28:21.242 iops : min= 513, max= 608, avg=571.21, stdev=18.33, samples=19 00:28:21.242 lat (msec) : 10=0.28%, 20=0.80%, 50=98.64%, 100=0.28% 00:28:21.242 cpu : usr=98.82%, sys=0.80%, ctx=10, majf=0, minf=70 00:28:21.242 IO depths : 1=4.7%, 2=10.1%, 4=22.2%, 8=54.8%, 16=8.2%, 32=0.0%, >=64=0.0% 00:28:21.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.242 complete : 0=0.0%, 4=93.5%, 8=1.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.242 issued rwts: total=5730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.242 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:21.242 00:28:21.242 Run status group 0 (all jobs): 00:28:21.242 READ: bw=53.9MiB/s (56.5MB/s), 2284KiB/s-2336KiB/s (2339kB/s-2392kB/s), 
io=540MiB (566MB), run=10001-10020msec 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:21.242 18:53:36 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.242 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:21.243 18:53:36 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:21.243 bdev_null0 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:21.243 [2024-07-15 18:53:36.860787] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:21.243 bdev_null1 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:21.243 
18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:21.243 { 00:28:21.243 "params": { 00:28:21.243 "name": "Nvme$subsystem", 00:28:21.243 "trtype": "$TEST_TRANSPORT", 00:28:21.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.243 "adrfam": "ipv4", 00:28:21.243 "trsvcid": "$NVMF_PORT", 00:28:21.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.243 "hdgst": ${hdgst:-false}, 
00:28:21.243 "ddgst": ${ddgst:-false} 00:28:21.243 }, 00:28:21.243 "method": "bdev_nvme_attach_controller" 00:28:21.243 } 00:28:21.243 EOF 00:28:21.243 )") 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # grep libasan 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:21.243 { 00:28:21.243 "params": { 00:28:21.243 "name": "Nvme$subsystem", 00:28:21.243 "trtype": "$TEST_TRANSPORT", 00:28:21.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.243 "adrfam": "ipv4", 00:28:21.243 "trsvcid": "$NVMF_PORT", 00:28:21.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.243 "hdgst": ${hdgst:-false}, 00:28:21.243 "ddgst": ${ddgst:-false} 00:28:21.243 }, 00:28:21.243 "method": "bdev_nvme_attach_controller" 00:28:21.243 } 00:28:21.243 EOF 00:28:21.243 )") 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:21.243 "params": { 00:28:21.243 "name": "Nvme0", 00:28:21.243 "trtype": "tcp", 00:28:21.243 "traddr": "10.0.0.2", 00:28:21.243 "adrfam": "ipv4", 00:28:21.243 "trsvcid": "4420", 00:28:21.243 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:21.243 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:21.243 "hdgst": false, 00:28:21.243 "ddgst": false 00:28:21.243 }, 00:28:21.243 "method": "bdev_nvme_attach_controller" 00:28:21.243 },{ 00:28:21.243 "params": { 00:28:21.243 "name": "Nvme1", 00:28:21.243 "trtype": "tcp", 00:28:21.243 "traddr": "10.0.0.2", 00:28:21.243 "adrfam": "ipv4", 00:28:21.243 "trsvcid": "4420", 00:28:21.243 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:21.243 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:21.243 "hdgst": false, 00:28:21.243 "ddgst": false 00:28:21.243 }, 00:28:21.243 "method": "bdev_nvme_attach_controller" 00:28:21.243 }' 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:21.243 18:53:36 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:21.243 18:53:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:21.244 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:21.244 ... 00:28:21.244 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:21.244 ... 00:28:21.244 fio-3.35 00:28:21.244 Starting 4 threads 00:28:21.244 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.533 00:28:26.533 filename0: (groupid=0, jobs=1): err= 0: pid=1270465: Mon Jul 15 18:53:42 2024 00:28:26.533 read: IOPS=2670, BW=20.9MiB/s (21.9MB/s)(104MiB/5002msec) 00:28:26.533 slat (nsec): min=2949, max=64169, avg=10388.55, stdev=5004.74 00:28:26.533 clat (usec): min=942, max=7410, avg=2964.55, stdev=465.22 00:28:26.533 lat (usec): min=952, max=7420, avg=2974.94, stdev=465.50 00:28:26.533 clat percentiles (usec): 00:28:26.533 | 1.00th=[ 2008], 5.00th=[ 2278], 10.00th=[ 2474], 20.00th=[ 2704], 00:28:26.533 | 30.00th=[ 2802], 40.00th=[ 2868], 50.00th=[ 2966], 60.00th=[ 2999], 00:28:26.533 | 70.00th=[ 3032], 80.00th=[ 3130], 90.00th=[ 3392], 95.00th=[ 3916], 00:28:26.533 | 99.00th=[ 4555], 99.50th=[ 4621], 99.90th=[ 5145], 99.95th=[ 7242], 00:28:26.533 | 99.99th=[ 7242] 00:28:26.533 bw ( KiB/s): min=20256, max=22640, per=25.64%, avg=21366.40, stdev=749.60, samples=10 00:28:26.533 iops : min= 2532, max= 2830, avg=2670.80, stdev=93.70, samples=10 00:28:26.533 lat (usec) : 1000=0.01% 00:28:26.533 lat (msec) : 2=0.85%, 4=94.63%, 10=4.51% 00:28:26.533 cpu : usr=96.40%, sys=3.28%, ctx=10, majf=0, minf=0 00:28:26.533 IO depths : 1=0.2%, 2=5.9%, 4=66.4%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:26.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:26.533 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:26.533 issued rwts: total=13357,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:26.533 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:26.533 filename0: (groupid=0, jobs=1): err= 0: pid=1270466: Mon Jul 15 18:53:42 2024 00:28:26.533 read: IOPS=2557, BW=20.0MiB/s (20.9MB/s)(99.9MiB/5001msec) 00:28:26.533 slat (nsec): min=4251, max=68959, avg=10671.06, stdev=7529.28 00:28:26.533 clat (usec): min=837, max=5510, avg=3096.58, stdev=520.86 00:28:26.533 lat (usec): min=846, max=5523, avg=3107.25, stdev=520.53 00:28:26.533 clat percentiles (usec): 00:28:26.533 | 1.00th=[ 2180], 5.00th=[ 2507], 10.00th=[ 2671], 20.00th=[ 2802], 00:28:26.533 | 30.00th=[ 2868], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 3032], 00:28:26.533 | 70.00th=[ 3064], 80.00th=[ 3261], 90.00th=[ 3916], 95.00th=[ 4359], 00:28:26.534 | 99.00th=[ 4817], 99.50th=[ 4883], 99.90th=[ 5276], 99.95th=[ 5342], 00:28:26.534 | 99.99th=[ 5473] 00:28:26.534 bw ( KiB/s): min=19728, max=21152, per=24.58%, avg=20483.56, stdev=470.69, samples=9 00:28:26.534 iops : min= 2466, max= 2644, avg=2560.44, stdev=58.84, samples=9 00:28:26.534 lat (usec) : 1000=0.01% 00:28:26.534 lat (msec) : 2=0.43%, 4=90.46%, 10=9.10% 00:28:26.534 cpu : usr=95.88%, sys=3.80%, ctx=9, majf=0, minf=0 00:28:26.534 IO depths : 1=0.2%, 2=2.7%, 4=69.7%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:26.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:26.534 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:26.534 issued rwts: total=12790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:26.534 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:26.534 filename1: (groupid=0, jobs=1): err= 0: pid=1270467: Mon Jul 15 18:53:42 2024 00:28:26.534 read: IOPS=2610, BW=20.4MiB/s (21.4MB/s)(102MiB/5001msec) 00:28:26.534 slat (nsec): min=6195, max=67919, 
avg=10379.63, stdev=6871.15 00:28:26.534 clat (usec): min=740, max=5769, avg=3033.30, stdev=419.74 00:28:26.534 lat (usec): min=746, max=5777, avg=3043.68, stdev=419.90 00:28:26.534 clat percentiles (usec): 00:28:26.534 | 1.00th=[ 2212], 5.00th=[ 2573], 10.00th=[ 2704], 20.00th=[ 2802], 00:28:26.534 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2999], 60.00th=[ 3032], 00:28:26.534 | 70.00th=[ 3064], 80.00th=[ 3163], 90.00th=[ 3392], 95.00th=[ 3916], 00:28:26.534 | 99.00th=[ 4621], 99.50th=[ 4752], 99.90th=[ 5080], 99.95th=[ 5342], 00:28:26.534 | 99.99th=[ 5735] 00:28:26.534 bw ( KiB/s): min=19728, max=21856, per=25.05%, avg=20876.44, stdev=615.78, samples=9 00:28:26.534 iops : min= 2466, max= 2732, avg=2609.56, stdev=76.97, samples=9 00:28:26.534 lat (usec) : 750=0.02%, 1000=0.03% 00:28:26.534 lat (msec) : 2=0.39%, 4=95.06%, 10=4.50% 00:28:26.534 cpu : usr=96.44%, sys=3.22%, ctx=11, majf=0, minf=0 00:28:26.534 IO depths : 1=0.1%, 2=2.5%, 4=70.8%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:26.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:26.534 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:26.534 issued rwts: total=13056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:26.534 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:26.534 filename1: (groupid=0, jobs=1): err= 0: pid=1270468: Mon Jul 15 18:53:42 2024 00:28:26.534 read: IOPS=2579, BW=20.2MiB/s (21.1MB/s)(101MiB/5001msec) 00:28:26.534 slat (nsec): min=6117, max=69119, avg=10924.79, stdev=7617.79 00:28:26.534 clat (usec): min=769, max=6512, avg=3069.11, stdev=460.41 00:28:26.534 lat (usec): min=777, max=6539, avg=3080.03, stdev=460.33 00:28:26.534 clat percentiles (usec): 00:28:26.534 | 1.00th=[ 2212], 5.00th=[ 2573], 10.00th=[ 2704], 20.00th=[ 2835], 00:28:26.534 | 30.00th=[ 2868], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:28:26.534 | 70.00th=[ 3064], 80.00th=[ 3228], 90.00th=[ 3556], 95.00th=[ 4178], 00:28:26.534 | 
99.00th=[ 4752], 99.50th=[ 4883], 99.90th=[ 5473], 99.95th=[ 5538], 00:28:26.534 | 99.99th=[ 6456] 00:28:26.534 bw ( KiB/s): min=19968, max=21632, per=24.65%, avg=20545.78, stdev=563.95, samples=9 00:28:26.534 iops : min= 2496, max= 2704, avg=2568.22, stdev=70.49, samples=9 00:28:26.534 lat (usec) : 1000=0.05% 00:28:26.534 lat (msec) : 2=0.43%, 4=93.25%, 10=6.28% 00:28:26.534 cpu : usr=96.40%, sys=3.28%, ctx=12, majf=0, minf=9 00:28:26.534 IO depths : 1=0.1%, 2=2.0%, 4=70.3%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:26.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:26.534 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:26.534 issued rwts: total=12902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:26.534 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:26.534 00:28:26.534 Run status group 0 (all jobs): 00:28:26.534 READ: bw=81.4MiB/s (85.3MB/s), 20.0MiB/s-20.9MiB/s (20.9MB/s-21.9MB/s), io=407MiB (427MB), run=5001-5002msec 00:28:26.534 18:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:26.534 18:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:26.534 18:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:26.534 18:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:26.534 18:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:26.534 18:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:26.534 18:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.534 18:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:26.534 18:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.534 18:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null0 00:28:26.534 18:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.534 18:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:26.534 18:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.534 18:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:26.534 18:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:26.534 18:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:26.534 18:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:26.534 18:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.534 18:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:26.534 18:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.534 18:53:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:26.534 18:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.534 18:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:26.534 18:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.534 00:28:26.534 real 0m23.941s 00:28:26.534 user 4m51.205s 00:28:26.534 sys 0m4.542s 00:28:26.534 18:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:26.534 18:53:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:26.534 ************************************ 00:28:26.534 END TEST fio_dif_rand_params 00:28:26.534 ************************************ 00:28:26.534 18:53:43 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:26.534 18:53:43 
nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:26.534 18:53:43 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:26.534 18:53:43 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:26.534 18:53:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:26.534 ************************************ 00:28:26.534 START TEST fio_dif_digest 00:28:26.534 ************************************ 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 
--dif-type 3 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:26.534 bdev_null0 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:26.534 [2024-07-15 18:53:43.139631] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:26.534 18:53:43 
nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:26.534 18:53:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:26.534 { 00:28:26.534 "params": { 00:28:26.534 "name": "Nvme$subsystem", 00:28:26.534 "trtype": "$TEST_TRANSPORT", 00:28:26.534 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.534 "adrfam": "ipv4", 00:28:26.534 "trsvcid": "$NVMF_PORT", 00:28:26.534 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.534 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.534 "hdgst": ${hdgst:-false}, 00:28:26.534 "ddgst": ${ddgst:-false} 00:28:26.534 }, 00:28:26.534 "method": "bdev_nvme_attach_controller" 00:28:26.534 } 00:28:26.534 EOF 00:28:26.534 )") 00:28:26.535 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:26.535 18:53:43 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:26.535 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:26.535 18:53:43 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:26.535 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:26.535 18:53:43 nvmf_dif.fio_dif_digest 
-- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:26.535 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:28:26.535 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:26.535 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:26.535 18:53:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:28:26.535 18:53:43 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:26.535 18:53:43 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:26.535 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:26.535 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:28:26.535 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:26.535 18:53:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:28:26.535 18:53:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:28:26.535 18:53:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:26.535 "params": { 00:28:26.535 "name": "Nvme0", 00:28:26.535 "trtype": "tcp", 00:28:26.535 "traddr": "10.0.0.2", 00:28:26.535 "adrfam": "ipv4", 00:28:26.535 "trsvcid": "4420", 00:28:26.535 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:26.535 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:26.535 "hdgst": true, 00:28:26.535 "ddgst": true 00:28:26.535 }, 00:28:26.535 "method": "bdev_nvme_attach_controller" 00:28:26.535 }' 00:28:26.535 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:26.535 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:26.535 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:26.535 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:26.535 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:26.535 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:26.535 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:26.535 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:26.535 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:26.535 18:53:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:26.792 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:26.792 ... 
00:28:26.792 fio-3.35 00:28:26.792 Starting 3 threads 00:28:27.050 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.254 00:28:39.254 filename0: (groupid=0, jobs=1): err= 0: pid=1271668: Mon Jul 15 18:53:53 2024 00:28:39.254 read: IOPS=274, BW=34.3MiB/s (36.0MB/s)(345MiB/10045msec) 00:28:39.254 slat (nsec): min=6868, max=65183, avg=24933.49, stdev=7587.66 00:28:39.254 clat (usec): min=7021, max=47386, avg=10891.36, stdev=1265.11 00:28:39.254 lat (usec): min=7052, max=47399, avg=10916.30, stdev=1264.66 00:28:39.254 clat percentiles (usec): 00:28:39.254 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10159], 00:28:39.254 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 00:28:39.254 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12256], 00:28:39.255 | 99.00th=[12911], 99.50th=[13173], 99.90th=[13960], 99.95th=[47449], 00:28:39.255 | 99.99th=[47449] 00:28:39.255 bw ( KiB/s): min=33536, max=36352, per=33.63%, avg=35238.40, stdev=734.83, samples=20 00:28:39.255 iops : min= 262, max= 284, avg=275.30, stdev= 5.74, samples=20 00:28:39.255 lat (msec) : 10=12.55%, 20=87.37%, 50=0.07% 00:28:39.255 cpu : usr=96.50%, sys=3.17%, ctx=19, majf=0, minf=204 00:28:39.255 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:39.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:39.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:39.255 issued rwts: total=2756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:39.255 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:39.255 filename0: (groupid=0, jobs=1): err= 0: pid=1271669: Mon Jul 15 18:53:53 2024 00:28:39.255 read: IOPS=286, BW=35.8MiB/s (37.6MB/s)(360MiB/10044msec) 00:28:39.255 slat (nsec): min=4485, max=40247, avg=18492.15, stdev=7819.07 00:28:39.255 clat (usec): min=8185, max=52042, avg=10430.39, stdev=1266.72 00:28:39.255 lat (usec): min=8211, max=52052, avg=10448.88, 
stdev=1266.54 00:28:39.255 clat percentiles (usec): 00:28:39.255 | 1.00th=[ 8848], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9765], 00:28:39.255 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:28:39.255 | 70.00th=[10814], 80.00th=[10945], 90.00th=[11207], 95.00th=[11600], 00:28:39.255 | 99.00th=[12125], 99.50th=[12387], 99.90th=[17957], 99.95th=[47449], 00:28:39.255 | 99.99th=[52167] 00:28:39.255 bw ( KiB/s): min=35840, max=37888, per=35.14%, avg=36825.60, stdev=513.85, samples=20 00:28:39.255 iops : min= 280, max= 296, avg=287.70, stdev= 4.01, samples=20 00:28:39.255 lat (msec) : 10=29.45%, 20=70.48%, 50=0.03%, 100=0.03% 00:28:39.255 cpu : usr=95.48%, sys=4.09%, ctx=22, majf=0, minf=170 00:28:39.255 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:39.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:39.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:39.255 issued rwts: total=2879,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:39.255 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:39.255 filename0: (groupid=0, jobs=1): err= 0: pid=1271670: Mon Jul 15 18:53:53 2024 00:28:39.255 read: IOPS=257, BW=32.2MiB/s (33.8MB/s)(324MiB/10045msec) 00:28:39.255 slat (nsec): min=6493, max=46484, avg=18268.43, stdev=7898.59 00:28:39.255 clat (usec): min=8285, max=46245, avg=11601.88, stdev=1271.11 00:28:39.255 lat (usec): min=8293, max=46276, avg=11620.15, stdev=1271.12 00:28:39.255 clat percentiles (usec): 00:28:39.255 | 1.00th=[ 9765], 5.00th=[10290], 10.00th=[10552], 20.00th=[10945], 00:28:39.255 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11469], 60.00th=[11731], 00:28:39.255 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12649], 95.00th=[13042], 00:28:39.255 | 99.00th=[13960], 99.50th=[14222], 99.90th=[14877], 99.95th=[45351], 00:28:39.255 | 99.99th=[46400] 00:28:39.255 bw ( KiB/s): min=31807, max=34048, per=31.59%, avg=33103.95, stdev=597.92, 
samples=20 00:28:39.255 iops : min= 248, max= 266, avg=258.60, stdev= 4.73, samples=20 00:28:39.255 lat (msec) : 10=2.24%, 20=97.68%, 50=0.08% 00:28:39.255 cpu : usr=95.96%, sys=3.63%, ctx=24, majf=0, minf=156 00:28:39.255 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:39.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:39.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:39.255 issued rwts: total=2589,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:39.255 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:39.255 00:28:39.255 Run status group 0 (all jobs): 00:28:39.255 READ: bw=102MiB/s (107MB/s), 32.2MiB/s-35.8MiB/s (33.8MB/s-37.6MB/s), io=1028MiB (1078MB), run=10044-10045msec 00:28:39.255 18:53:54 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:39.255 18:53:54 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:39.255 18:53:54 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:39.255 18:53:54 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:39.255 18:53:54 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:39.255 18:53:54 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:39.255 18:53:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.255 18:53:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:39.255 18:53:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.255 18:53:54 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:39.255 18:53:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.255 18:53:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:39.255 18:53:54 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.255 00:28:39.255 real 0m11.044s 00:28:39.255 user 0m35.581s 00:28:39.255 sys 0m1.360s 00:28:39.255 18:53:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:39.255 18:53:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:39.255 ************************************ 00:28:39.255 END TEST fio_dif_digest 00:28:39.255 ************************************ 00:28:39.255 18:53:54 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:39.255 18:53:54 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:39.255 18:53:54 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:39.255 18:53:54 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:39.255 18:53:54 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:28:39.255 18:53:54 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:39.255 18:53:54 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:28:39.255 18:53:54 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:39.255 18:53:54 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:39.255 rmmod nvme_tcp 00:28:39.255 rmmod nvme_fabrics 00:28:39.255 rmmod nvme_keyring 00:28:39.255 18:53:54 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:39.255 18:53:54 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:28:39.255 18:53:54 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:28:39.255 18:53:54 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1263080 ']' 00:28:39.255 18:53:54 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1263080 00:28:39.255 18:53:54 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1263080 ']' 00:28:39.255 18:53:54 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1263080 00:28:39.255 18:53:54 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:28:39.255 18:53:54 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:39.255 18:53:54 nvmf_dif -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1263080 00:28:39.255 18:53:54 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:39.255 18:53:54 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:39.255 18:53:54 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1263080' 00:28:39.255 killing process with pid 1263080 00:28:39.255 18:53:54 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1263080 00:28:39.255 18:53:54 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1263080 00:28:39.255 18:53:54 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:39.255 18:53:54 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:40.190 Waiting for block devices as requested 00:28:40.190 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:28:40.449 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:40.449 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:40.449 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:40.709 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:40.709 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:40.709 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:40.709 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:40.969 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:40.969 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:40.969 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:40.969 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:41.228 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:41.228 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:41.228 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:41.487 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:41.487 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:41.487 18:53:58 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:41.487 18:53:58 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:41.487 
18:53:58 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:41.487 18:53:58 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:41.487 18:53:58 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.487 18:53:58 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:41.487 18:53:58 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.068 18:54:00 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:44.068 00:28:44.068 real 1m12.137s 00:28:44.068 user 7m8.998s 00:28:44.068 sys 0m17.611s 00:28:44.068 18:54:00 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:44.068 18:54:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:44.068 ************************************ 00:28:44.068 END TEST nvmf_dif 00:28:44.068 ************************************ 00:28:44.068 18:54:00 -- common/autotest_common.sh@1142 -- # return 0 00:28:44.068 18:54:00 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:44.068 18:54:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:44.068 18:54:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:44.068 18:54:00 -- common/autotest_common.sh@10 -- # set +x 00:28:44.068 ************************************ 00:28:44.068 START TEST nvmf_abort_qd_sizes 00:28:44.068 ************************************ 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:44.068 * Looking for test storage... 
00:28:44.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:44.068 18:54:00 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:28:44.068 18:54:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:49.346 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:49.347 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:49.347 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:28:49.347 Found net devices under 0000:86:00.0: cvl_0_0 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:49.347 Found net devices under 0000:86:00.1: cvl_0_1 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:49.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:49.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:28:49.347 00:28:49.347 --- 10.0.0.2 ping statistics --- 00:28:49.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:49.347 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:49.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:49.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:28:49.347 00:28:49.347 --- 10.0.0.1 ping statistics --- 00:28:49.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:49.347 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:49.347 18:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:51.251 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:51.251 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:51.251 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:51.251 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:51.510 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:51.510 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:51.510 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:51.510 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:51.510 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:51.510 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:51.510 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:51.510 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:51.510 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:51.510 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:51.510 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:28:51.510 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:52.446 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:28:52.446 18:54:09 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:52.446 18:54:09 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:52.446 18:54:09 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:52.446 18:54:09 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:52.446 18:54:09 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:52.446 18:54:09 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:52.446 18:54:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:52.446 18:54:09 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:52.446 18:54:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:52.446 18:54:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:52.446 18:54:09 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1279524 00:28:52.446 18:54:09 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1279524 00:28:52.446 18:54:09 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:52.446 18:54:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1279524 ']' 00:28:52.446 18:54:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:52.446 18:54:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:52.446 18:54:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:52.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:52.447 18:54:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:52.447 18:54:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:52.447 [2024-07-15 18:54:09.142792] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:28:52.447 [2024-07-15 18:54:09.142836] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:52.705 EAL: No free 2048 kB hugepages reported on node 1 00:28:52.705 [2024-07-15 18:54:09.202063] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:52.705 [2024-07-15 18:54:09.284581] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:52.705 [2024-07-15 18:54:09.284617] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:52.705 [2024-07-15 18:54:09.284624] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:52.705 [2024-07-15 18:54:09.284630] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:52.705 [2024-07-15 18:54:09.284636] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:52.705 [2024-07-15 18:54:09.284671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.705 [2024-07-15 18:54:09.284765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:52.705 [2024-07-15 18:54:09.284850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:52.705 [2024-07-15 18:54:09.284852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.270 18:54:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:53.270 18:54:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:28:53.270 18:54:09 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:53.270 18:54:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:53.270 18:54:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:53.528 18:54:09 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:53.528 18:54:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:53.528 18:54:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:53.529 18:54:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:53.529 18:54:09 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:28:53.529 18:54:09 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:28:53.529 18:54:09 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:28:53.529 18:54:09 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:28:53.529 18:54:09 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:53.529 18:54:09 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:28:53.529 18:54:09 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:28:53.529 18:54:10 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:53.529 18:54:10 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:53.529 18:54:10 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:28:53.529 18:54:10 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:28:53.529 18:54:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:28:53.529 18:54:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:28:53.529 18:54:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:53.529 18:54:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:53.529 18:54:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:53.529 18:54:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:53.529 ************************************ 00:28:53.529 START TEST spdk_target_abort 00:28:53.529 ************************************ 00:28:53.529 18:54:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:28:53.529 18:54:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:53.529 18:54:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:28:53.529 18:54:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.529 18:54:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:56.810 spdk_targetn1 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:56.810 [2024-07-15 18:54:12.875960] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:56.810 [2024-07-15 18:54:12.904831] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:56.810 18:54:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:56.810 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.098 Initializing NVMe Controllers 00:29:00.098 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:00.098 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:00.098 Initialization complete. Launching workers. 
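The xtrace above compresses the SPDK target setup into one run-on block. Reconstructed as a plain script, the five RPCs it issues are (a sketch: the RPC names and arguments are taken verbatim from the trace, but driving them through scripts/rpc.py against an already-running nvmf_tgt is an assumption — the test itself goes through the rpc_cmd wrapper):

```shell
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # path from this workspace

# Claim the local NVMe device (0000:5e:00.0) as an SPDK bdev named spdk_target
$RPC bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target
# Bring up the TCP transport, create a test subsystem, and export the bdev on 10.0.0.2:4420
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
```

This reproduces the "TCP Transport Init" and "Target Listening on 10.0.0.2 port 4420" notices seen in the trace.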
00:29:00.098 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15626, failed: 0
00:29:00.098 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1447, failed to submit 14179
00:29:00.098 success 802, unsuccess 645, failed 0
00:29:00.098 18:54:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:29:00.098 18:54:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:29:00.098 EAL: No free 2048 kB hugepages reported on node 1
00:29:03.388 Initializing NVMe Controllers
00:29:03.388 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:29:03.388 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:29:03.388 Initialization complete. Launching workers.
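Each abort run prints three summary lines whose counters must balance. A small hypothetical checker (not part of SPDK; the sed extraction is purely illustrative) over the qd=4 numbers above:

```shell
# Summary lines copied verbatim from the qd=4 run above
summary='NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15626, failed: 0
CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1447, failed to submit 14179
success 802, unsuccess 645, failed 0'

io=$(sed -n 's/.*I\/O completed: \([0-9]*\),.*/\1/p'   <<<"$summary")
sub=$(sed -n 's/.*abort submitted \([0-9]*\),.*/\1/p'  <<<"$summary")
miss=$(sed -n 's/.*failed to submit \([0-9]*\).*/\1/p' <<<"$summary")
ok=$(sed -n 's/^success \([0-9]*\),.*/\1/p'            <<<"$summary")
bad=$(sed -n 's/.*unsuccess \([0-9]*\),.*/\1/p'        <<<"$summary")

# Every completed I/O either had an abort submitted for it or the submit failed,
# and the submitted aborts split into successful ("success") and unsuccessful
# ("unsuccess", the tool's own wording) ones.
[ $((sub + miss)) -eq "$io" ] && [ $((ok + bad)) -eq "$sub" ] && echo "counters balance"
```

The same invariants hold for every run in this log (e.g. qd=24: 1237 + 7299 = 8536 and 330 + 907 = 1237).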
00:29:03.388 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8536, failed: 0
00:29:03.388 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1237, failed to submit 7299
00:29:03.388 success 330, unsuccess 907, failed 0
00:29:03.388 18:54:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:29:03.388 18:54:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:29:03.388 EAL: No free 2048 kB hugepages reported on node 1
00:29:06.680 Initializing NVMe Controllers
00:29:06.680 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:29:06.680 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:29:06.680 Initialization complete. Launching workers.
00:29:06.680 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38013, failed: 0 00:29:06.680 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2764, failed to submit 35249 00:29:06.680 success 609, unsuccess 2155, failed 0 00:29:06.680 18:54:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:29:06.680 18:54:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.680 18:54:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:06.680 18:54:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.680 18:54:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:06.680 18:54:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.680 18:54:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1279524 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1279524 ']' 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1279524 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1279524 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1279524' 00:29:07.655 killing process with pid 1279524 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1279524 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1279524 00:29:07.655 00:29:07.655 real 0m14.197s 00:29:07.655 user 0m56.588s 00:29:07.655 sys 0m2.330s 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:07.655 ************************************ 00:29:07.655 END TEST spdk_target_abort 00:29:07.655 ************************************ 00:29:07.655 18:54:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:29:07.655 18:54:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:29:07.655 18:54:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:07.655 18:54:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:07.655 18:54:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:07.655 ************************************ 00:29:07.655 START TEST kernel_target_abort 00:29:07.655 ************************************ 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local 
ip 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:07.655 18:54:24 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:29:07.655 18:54:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:07.950 18:54:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:07.950 18:54:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:10.489 Waiting for block devices as requested 00:29:10.489 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:10.489 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:10.489 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:10.489 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:10.489 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:10.489 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:10.748 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:10.748 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:10.748 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:10.748 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:11.007 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:11.007 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:11.007 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:11.007 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:11.267 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:11.267 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:11.267 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:11.267 18:54:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:11.267 18:54:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:11.267 18:54:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
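The configure_kernel_target steps traced in this part of the log follow the standard Linux nvmet configfs layout. A consolidated sketch (the echo redirect targets are inferred — xtrace does not record redirections — but the attribute names are the standard nvmet ones; requires root and the nvmet modules):

```shell
NVMET=/sys/kernel/config/nvmet
SUBSYS=$NVMET/subsystems/nqn.2016-06.io.spdk:testnqn
NS=$SUBSYS/namespaces/1
PORT=$NVMET/ports/1

modprobe nvmet nvmet_tcp
mkdir "$SUBSYS" "$NS" "$PORT"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$SUBSYS/attr_serial"  # inferred target file
echo 1 > "$SUBSYS/attr_allow_any_host"
echo /dev/nvme0n1 > "$NS/device_path"   # the local NVMe disk found above
echo 1 > "$NS/enable"
echo 10.0.0.1 > "$PORT/addr_traddr"
echo tcp  > "$PORT/addr_trtype"
echo 4420 > "$PORT/addr_trsvcid"
echo ipv4 > "$PORT/addr_adrfam"
ln -s "$SUBSYS" "$PORT/subsystems/"     # expose the subsystem on the port
```

After the symlink is created, the `nvme discover` output later in the log shows both the discovery subsystem and nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420.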
00:29:11.267 18:54:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:29:11.267 18:54:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:11.267 18:54:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:29:11.267 18:54:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:11.267 18:54:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:11.267 18:54:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:11.527 No valid GPT data, bailing 00:29:11.527 18:54:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:11.527 18:54:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:29:11.527 18:54:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:29:11.527 18:54:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:11.527 18:54:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:29:11.527 18:54:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:11.527 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:11.527 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:11.527 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:11.527 18:54:28 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:29:11.527 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:29:11.527 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:29:11.527 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:29:11.527 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:29:11.527 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:29:11.527 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:29:11.527 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:11.527 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:29:11.527 00:29:11.527 Discovery Log Number of Records 2, Generation counter 2 00:29:11.527 =====Discovery Log Entry 0====== 00:29:11.527 trtype: tcp 00:29:11.528 adrfam: ipv4 00:29:11.528 subtype: current discovery subsystem 00:29:11.528 treq: not specified, sq flow control disable supported 00:29:11.528 portid: 1 00:29:11.528 trsvcid: 4420 00:29:11.528 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:11.528 traddr: 10.0.0.1 00:29:11.528 eflags: none 00:29:11.528 sectype: none 00:29:11.528 =====Discovery Log Entry 1====== 00:29:11.528 trtype: tcp 00:29:11.528 adrfam: ipv4 00:29:11.528 subtype: nvme subsystem 00:29:11.528 treq: not specified, sq flow control disable supported 00:29:11.528 portid: 1 00:29:11.528 trsvcid: 4420 00:29:11.528 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:11.528 traddr: 10.0.0.1 00:29:11.528 eflags: none 00:29:11.528 
sectype: none 00:29:11.528 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:29:11.528 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:11.528 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:11.528 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:11.528 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:11.528 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:11.528 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:11.528 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:11.528 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:11.528 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:11.528 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:11.528 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:11.528 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:11.528 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:11.528 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:11.528 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:11.528 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:11.528 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:11.528 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:11.528 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:11.528 18:54:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:11.528 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.820 Initializing NVMe Controllers 00:29:14.820 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:14.820 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:14.820 Initialization complete. Launching workers. 
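rabort() builds the -r transport descriptor one field at a time, as the repeated target= assignments in the trace show. The loop reconstructs as (a sketch using bash indirect expansion; variable names follow the trace, but the exact append expression is reconstructed, not copied):

```shell
# Field values for the kernel-target run traced above
trtype=tcp adrfam=IPv4 traddr=10.0.0.1 trsvcid=4420
subnqn=nqn.2016-06.io.spdk:testnqn

target=""
for r in trtype adrfam traddr trsvcid subnqn; do
  target="${target:+$target }$r:${!r}"   # append "name:value", space-separated
done
echo "$target"
```

This yields the exact string passed to the abort example via -r: `trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn`.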
00:29:14.820 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 80553, failed: 0
00:29:14.820 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 80553, failed to submit 0
00:29:14.820 success 0, unsuccess 80553, failed 0
00:29:14.820 18:54:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:29:14.820 18:54:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:29:14.820 EAL: No free 2048 kB hugepages reported on node 1
00:29:18.118 Initializing NVMe Controllers
00:29:18.118 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:29:18.118 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:29:18.118 Initialization complete. Launching workers.
00:29:18.118 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 133391, failed: 0
00:29:18.118 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33478, failed to submit 99913
00:29:18.118 success 0, unsuccess 33478, failed 0
00:29:18.118 18:54:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:29:18.118 18:54:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:29:18.118 EAL: No free 2048 kB hugepages reported on node 1
00:29:21.406 Initializing NVMe Controllers
00:29:21.406 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:29:21.406 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:29:21.406 Initialization complete. Launching workers.
00:29:21.406 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 127923, failed: 0 00:29:21.406 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32002, failed to submit 95921 00:29:21.406 success 0, unsuccess 32002, failed 0 00:29:21.406 18:54:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:29:21.406 18:54:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:21.406 18:54:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:29:21.406 18:54:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:21.406 18:54:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:21.406 18:54:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:21.406 18:54:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:21.406 18:54:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:21.406 18:54:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:21.406 18:54:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:23.312 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:23.312 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:23.312 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:23.312 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:23.312 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:23.312 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:23.312 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:23.312 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:23.312 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:23.312 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:23.312 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:23.312 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:23.312 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:23.312 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:23.312 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:23.312 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:23.879 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:23.879 00:29:23.879 real 0m16.264s 00:29:23.879 user 0m7.737s 00:29:23.879 sys 0m4.587s 00:29:23.879 18:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:23.879 18:54:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:23.879 ************************************ 00:29:23.879 END TEST kernel_target_abort 00:29:23.879 ************************************ 00:29:24.136 18:54:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:29:24.136 18:54:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:24.136 18:54:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:29:24.136 18:54:40 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:24.136 18:54:40 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:29:24.136 18:54:40 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:24.136 18:54:40 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:29:24.136 18:54:40 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:24.136 18:54:40 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:24.136 rmmod nvme_tcp 00:29:24.136 rmmod nvme_fabrics 
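clean_kernel_target, traced above, tears the configfs tree down in reverse order before setup.sh rebinds the devices. As a consolidated sketch (the `echo 0` redirect target is inferred as the namespace enable attribute; everything else mirrors the traced commands):

```shell
SUBSYS=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
PORT=/sys/kernel/config/nvmet/ports/1

echo 0 > "$SUBSYS/namespaces/1/enable"                 # quiesce the namespace first
rm -f "$PORT/subsystems/nqn.2016-06.io.spdk:testnqn"   # unlink port <-> subsystem
rmdir "$SUBSYS/namespaces/1"
rmdir "$PORT"
rmdir "$SUBSYS"
modprobe -r nvmet_tcp nvmet                            # drop the kernel target modules
```

The order matters: configfs refuses to rmdir a subsystem that is still linked into a port or still holds an enabled namespace.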
00:29:24.136 rmmod nvme_keyring 00:29:24.136 18:54:40 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:24.136 18:54:40 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:29:24.136 18:54:40 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:29:24.136 18:54:40 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1279524 ']' 00:29:24.136 18:54:40 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1279524 00:29:24.136 18:54:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1279524 ']' 00:29:24.137 18:54:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1279524 00:29:24.137 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1279524) - No such process 00:29:24.137 18:54:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1279524 is not found' 00:29:24.137 Process with pid 1279524 is not found 00:29:24.137 18:54:40 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:24.137 18:54:40 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:26.039 Waiting for block devices as requested 00:29:26.297 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:26.297 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:26.297 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:26.555 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:26.555 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:26.555 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:26.555 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:26.812 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:26.812 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:26.812 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:26.812 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:27.071 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:27.071 0000:80:04.4 (8086 2021): 
vfio-pci -> ioatdma 00:29:27.071 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:27.071 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:27.329 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:27.329 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:27.329 18:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:27.329 18:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:27.329 18:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:27.329 18:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:27.329 18:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.329 18:54:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:27.329 18:54:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.912 18:54:46 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:29.912 00:29:29.912 real 0m45.786s 00:29:29.912 user 1m7.879s 00:29:29.912 sys 0m14.451s 00:29:29.912 18:54:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:29.912 18:54:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:29.912 ************************************ 00:29:29.912 END TEST nvmf_abort_qd_sizes 00:29:29.912 ************************************ 00:29:29.912 18:54:46 -- common/autotest_common.sh@1142 -- # return 0 00:29:29.912 18:54:46 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:29.912 18:54:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:29.912 18:54:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:29.912 18:54:46 -- common/autotest_common.sh@10 -- # set +x 00:29:29.912 ************************************ 00:29:29.912 START TEST keyring_file 00:29:29.912 
************************************ 00:29:29.912 18:54:46 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:29.912 * Looking for test storage... 00:29:29.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:29.912 18:54:46 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:29.912 18:54:46 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.912 
18:54:46 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.912 18:54:46 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.912 18:54:46 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.912 18:54:46 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.912 18:54:46 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.912 18:54:46 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.912 18:54:46 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.912 18:54:46 
keyring_file -- paths/export.sh@5 -- # export PATH 00:29:29.912 18:54:46 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@47 -- # : 0 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:29.912 18:54:46 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:29.912 18:54:46 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:29.912 18:54:46 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:29.912 18:54:46 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:29.912 18:54:46 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:29.912 18:54:46 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:29.912 18:54:46 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:29.912 18:54:46 keyring_file -- keyring/common.sh@15 -- # local 
name key digest path 00:29:29.912 18:54:46 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:29.912 18:54:46 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:29.912 18:54:46 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:29.912 18:54:46 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:29.912 18:54:46 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.T9PmHl2cKI 00:29:29.912 18:54:46 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:29.912 18:54:46 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.T9PmHl2cKI 00:29:29.912 18:54:46 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.T9PmHl2cKI 00:29:29.912 18:54:46 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.T9PmHl2cKI 00:29:29.912 18:54:46 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:29.912 18:54:46 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:29.912 18:54:46 keyring_file -- keyring/common.sh@17 -- # name=key1 00:29:29.912 18:54:46 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:29.912 18:54:46 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:29.912 18:54:46 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:29.912 18:54:46 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.mHSfw4Ur4w 00:29:29.912 18:54:46 
keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:29.912 18:54:46 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:29.912 18:54:46 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.mHSfw4Ur4w 00:29:29.912 18:54:46 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.mHSfw4Ur4w 00:29:29.912 18:54:46 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.mHSfw4Ur4w 00:29:29.912 18:54:46 keyring_file -- keyring/file.sh@30 -- # tgtpid=1288296 00:29:29.912 18:54:46 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1288296 00:29:29.912 18:54:46 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1288296 ']' 00:29:29.912 18:54:46 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:29.912 18:54:46 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:29.912 18:54:46 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:29.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
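The `prep_key`/`format_interchange_psk` steps traced above pipe each configured hex key string through an inline `python -` heredoc (`nvmf/common.sh@705`) to produce the NVMe/TCP TLS PSK interchange form before writing it to a `mktemp` file. The heredoc body itself is not shown in this log; the sketch below is an assumption of what it computes, based on the NVMe/TCP PSK interchange convention of treating the configured key string as ASCII bytes, appending a little-endian CRC32 of those bytes, and base64-encoding the result between the `NVMeTLSkey-1` prefix and a two-digit hash identifier (0 meaning no hash):

```python
import base64
import zlib


def format_interchange_psk(key: str, digest: int, prefix: str = "NVMeTLSkey-1") -> str:
    """Wrap a configured PSK in the NVMe TLS PSK interchange format.

    The key string is taken as ASCII bytes, a little-endian CRC32 of those
    bytes is appended, and the whole payload is base64-encoded between the
    prefix and a two-hex-digit hash identifier (00 = no hash).
    """
    raw = key.encode("ascii")
    crc = zlib.crc32(raw).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(raw + crc).decode("ascii")
    return "{}:{:02x}:{}:".format(prefix, digest, b64)


if __name__ == "__main__":
    # Same key0 value and digest=0 the test above feeds to prep_key.
    print(format_interchange_psk("00112233445566778899aabbccddeeff", 0))
```

The test then `chmod 0600`s the resulting file and registers it with `keyring_file_add_key` over the bperf RPC socket, as the subsequent trace lines show.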
00:29:29.912 18:54:46 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:29.912 18:54:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:29.912 18:54:46 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:29.912 [2024-07-15 18:54:46.370895] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:29:29.913 [2024-07-15 18:54:46.370942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1288296 ] 00:29:29.913 EAL: No free 2048 kB hugepages reported on node 1 00:29:29.913 [2024-07-15 18:54:46.422486] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.913 [2024-07-15 18:54:46.501693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.478 18:54:47 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:30.478 18:54:47 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:29:30.478 18:54:47 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:29:30.478 18:54:47 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.478 18:54:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:30.478 [2024-07-15 18:54:47.168115] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:30.737 null0 00:29:30.737 [2024-07-15 18:54:47.200174] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:30.737 [2024-07-15 18:54:47.200491] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:30.737 [2024-07-15 18:54:47.208192] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:30.737 18:54:47 keyring_file -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.737 18:54:47 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:30.737 18:54:47 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:30.737 18:54:47 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:30.737 18:54:47 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:30.737 18:54:47 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:30.737 18:54:47 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:30.737 18:54:47 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:30.737 18:54:47 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:30.737 18:54:47 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.737 18:54:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:30.737 [2024-07-15 18:54:47.216214] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:29:30.737 request: 00:29:30.737 { 00:29:30.737 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:30.737 "secure_channel": false, 00:29:30.737 "listen_address": { 00:29:30.737 "trtype": "tcp", 00:29:30.737 "traddr": "127.0.0.1", 00:29:30.737 "trsvcid": "4420" 00:29:30.737 }, 00:29:30.737 "method": "nvmf_subsystem_add_listener", 00:29:30.737 "req_id": 1 00:29:30.737 } 00:29:30.737 Got JSON-RPC error response 00:29:30.737 response: 00:29:30.737 { 00:29:30.737 "code": -32602, 00:29:30.737 "message": "Invalid parameters" 00:29:30.737 } 00:29:30.737 18:54:47 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:30.737 18:54:47 keyring_file -- common/autotest_common.sh@651 -- # es=1 
00:29:30.737 18:54:47 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:30.737 18:54:47 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:30.737 18:54:47 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:30.737 18:54:47 keyring_file -- keyring/file.sh@46 -- # bperfpid=1288530 00:29:30.737 18:54:47 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1288530 /var/tmp/bperf.sock 00:29:30.737 18:54:47 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1288530 ']' 00:29:30.737 18:54:47 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:30.737 18:54:47 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:30.737 18:54:47 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:30.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:30.737 18:54:47 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:30.737 18:54:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:30.737 18:54:47 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:30.737 [2024-07-15 18:54:47.264155] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:29:30.737 [2024-07-15 18:54:47.264199] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1288530 ] 00:29:30.737 EAL: No free 2048 kB hugepages reported on node 1 00:29:30.737 [2024-07-15 18:54:47.318137] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.737 [2024-07-15 18:54:47.396912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.673 18:54:48 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:31.673 18:54:48 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:29:31.673 18:54:48 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.T9PmHl2cKI 00:29:31.673 18:54:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.T9PmHl2cKI 00:29:31.673 18:54:48 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.mHSfw4Ur4w 00:29:31.673 18:54:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.mHSfw4Ur4w 00:29:31.932 18:54:48 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:29:31.932 18:54:48 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:29:31.932 18:54:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:31.932 18:54:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:31.932 18:54:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:31.932 18:54:48 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.T9PmHl2cKI == 
\/\t\m\p\/\t\m\p\.\T\9\P\m\H\l\2\c\K\I ]] 00:29:31.932 18:54:48 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:29:31.932 18:54:48 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:29:31.932 18:54:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:31.932 18:54:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:31.932 18:54:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:32.189 18:54:48 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.mHSfw4Ur4w == \/\t\m\p\/\t\m\p\.\m\H\S\f\w\4\U\r\4\w ]] 00:29:32.189 18:54:48 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:29:32.189 18:54:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:32.189 18:54:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:32.189 18:54:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:32.189 18:54:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:32.189 18:54:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:32.448 18:54:48 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:29:32.448 18:54:48 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:29:32.448 18:54:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:32.448 18:54:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:32.448 18:54:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:32.448 18:54:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:32.448 18:54:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:32.448 18:54:49 keyring_file -- keyring/file.sh@54 -- # 
(( 1 == 1 )) 00:29:32.448 18:54:49 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:32.448 18:54:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:32.705 [2024-07-15 18:54:49.257815] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:32.705 nvme0n1 00:29:32.705 18:54:49 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:29:32.705 18:54:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:32.705 18:54:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:32.705 18:54:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:32.705 18:54:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:32.705 18:54:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:32.962 18:54:49 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:29:32.962 18:54:49 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:29:32.962 18:54:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:32.962 18:54:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:32.962 18:54:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:32.962 18:54:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:32.962 18:54:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:33.221 18:54:49 keyring_file -- 
keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:33.221 18:54:49 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:33.221 Running I/O for 1 seconds... 00:29:34.155 00:29:34.155 Latency(us) 00:29:34.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.155 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:34.155 nvme0n1 : 1.00 14061.18 54.93 0.00 0.00 9078.95 4729.99 16526.47 00:29:34.155 =================================================================================================================== 00:29:34.155 Total : 14061.18 54.93 0.00 0.00 9078.95 4729.99 16526.47 00:29:34.155 0 00:29:34.155 18:54:50 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:34.155 18:54:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:34.413 18:54:50 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:29:34.413 18:54:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:34.413 18:54:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:34.413 18:54:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:34.413 18:54:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:34.413 18:54:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:34.671 18:54:51 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:29:34.671 18:54:51 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:29:34.671 18:54:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:34.671 18:54:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:34.671 18:54:51 
keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:34.671 18:54:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:34.671 18:54:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:34.671 18:54:51 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:34.671 18:54:51 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:34.671 18:54:51 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:34.671 18:54:51 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:34.671 18:54:51 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:34.671 18:54:51 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:34.671 18:54:51 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:34.671 18:54:51 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:34.671 18:54:51 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:34.671 18:54:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:34.929 [2024-07-15 18:54:51.494864] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:34.929 [2024-07-15 18:54:51.495040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643770 (107): Transport endpoint is not connected 00:29:34.929 [2024-07-15 18:54:51.496034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1643770 (9): Bad file descriptor 00:29:34.929 [2024-07-15 18:54:51.497035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:34.929 [2024-07-15 18:54:51.497046] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:34.929 [2024-07-15 18:54:51.497053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:34.929 request: 00:29:34.929 { 00:29:34.929 "name": "nvme0", 00:29:34.929 "trtype": "tcp", 00:29:34.929 "traddr": "127.0.0.1", 00:29:34.929 "adrfam": "ipv4", 00:29:34.929 "trsvcid": "4420", 00:29:34.929 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:34.929 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:34.929 "prchk_reftag": false, 00:29:34.929 "prchk_guard": false, 00:29:34.929 "hdgst": false, 00:29:34.929 "ddgst": false, 00:29:34.929 "psk": "key1", 00:29:34.929 "method": "bdev_nvme_attach_controller", 00:29:34.929 "req_id": 1 00:29:34.929 } 00:29:34.929 Got JSON-RPC error response 00:29:34.929 response: 00:29:34.929 { 00:29:34.929 "code": -5, 00:29:34.929 "message": "Input/output error" 00:29:34.930 } 00:29:34.930 18:54:51 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:34.930 18:54:51 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:34.930 18:54:51 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:34.930 18:54:51 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:34.930 18:54:51 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:29:34.930 
18:54:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:34.930 18:54:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:34.930 18:54:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:34.930 18:54:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:34.930 18:54:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:35.188 18:54:51 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:29:35.188 18:54:51 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:29:35.188 18:54:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:35.188 18:54:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:35.188 18:54:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:35.188 18:54:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:35.188 18:54:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:35.188 18:54:51 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:35.188 18:54:51 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:29:35.188 18:54:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:35.447 18:54:52 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:29:35.447 18:54:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:35.706 18:54:52 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:29:35.706 18:54:52 keyring_file -- keyring/file.sh@77 -- # jq length 00:29:35.706 18:54:52 keyring_file 
-- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:35.706 18:54:52 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:29:35.706 18:54:52 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.T9PmHl2cKI 00:29:35.706 18:54:52 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.T9PmHl2cKI 00:29:35.707 18:54:52 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:35.707 18:54:52 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.T9PmHl2cKI 00:29:35.707 18:54:52 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:35.707 18:54:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:35.707 18:54:52 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:35.707 18:54:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:35.707 18:54:52 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.T9PmHl2cKI 00:29:35.707 18:54:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.T9PmHl2cKI 00:29:35.966 [2024-07-15 18:54:52.557129] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.T9PmHl2cKI': 0100660 00:29:35.966 [2024-07-15 18:54:52.557155] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:35.966 request: 00:29:35.966 { 00:29:35.966 "name": "key0", 00:29:35.966 "path": "/tmp/tmp.T9PmHl2cKI", 00:29:35.966 "method": "keyring_file_add_key", 00:29:35.966 "req_id": 1 00:29:35.966 } 00:29:35.966 Got JSON-RPC error response 00:29:35.966 response: 00:29:35.966 { 00:29:35.966 "code": -1, 00:29:35.966 "message": "Operation not permitted" 
00:29:35.966 } 00:29:35.966 18:54:52 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:35.966 18:54:52 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:35.966 18:54:52 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:35.966 18:54:52 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:35.966 18:54:52 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.T9PmHl2cKI 00:29:35.966 18:54:52 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.T9PmHl2cKI 00:29:35.966 18:54:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.T9PmHl2cKI 00:29:36.225 18:54:52 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.T9PmHl2cKI 00:29:36.225 18:54:52 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:29:36.225 18:54:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:36.225 18:54:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:36.225 18:54:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:36.225 18:54:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:36.225 18:54:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:36.225 18:54:52 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:36.226 18:54:52 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:36.226 18:54:52 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:36.226 18:54:52 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:36.226 18:54:52 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:36.226 18:54:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:36.226 18:54:52 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:36.226 18:54:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:36.226 18:54:52 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:36.226 18:54:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:36.531 [2024-07-15 18:54:53.086535] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.T9PmHl2cKI': No such file or directory 00:29:36.531 [2024-07-15 18:54:53.086560] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:36.531 [2024-07-15 18:54:53.086581] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:36.531 [2024-07-15 18:54:53.086586] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:36.531 [2024-07-15 18:54:53.086592] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:36.531 request: 00:29:36.531 { 00:29:36.531 "name": "nvme0", 00:29:36.531 "trtype": "tcp", 00:29:36.531 "traddr": "127.0.0.1", 00:29:36.531 "adrfam": "ipv4", 00:29:36.531 "trsvcid": "4420", 00:29:36.531 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:36.531 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:36.531 
"prchk_reftag": false, 00:29:36.531 "prchk_guard": false, 00:29:36.531 "hdgst": false, 00:29:36.531 "ddgst": false, 00:29:36.531 "psk": "key0", 00:29:36.531 "method": "bdev_nvme_attach_controller", 00:29:36.531 "req_id": 1 00:29:36.531 } 00:29:36.531 Got JSON-RPC error response 00:29:36.531 response: 00:29:36.531 { 00:29:36.531 "code": -19, 00:29:36.531 "message": "No such device" 00:29:36.531 } 00:29:36.531 18:54:53 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:36.531 18:54:53 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:36.531 18:54:53 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:36.531 18:54:53 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:36.531 18:54:53 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:36.531 18:54:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:36.790 18:54:53 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:36.790 18:54:53 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:36.790 18:54:53 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:36.790 18:54:53 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:36.790 18:54:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:36.790 18:54:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:36.790 18:54:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.EZ59Yqx1UZ 00:29:36.790 18:54:53 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:36.790 18:54:53 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:36.790 18:54:53 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:36.790 18:54:53 keyring_file -- 
nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:36.790 18:54:53 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:36.790 18:54:53 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:36.790 18:54:53 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:36.790 18:54:53 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.EZ59Yqx1UZ 00:29:36.790 18:54:53 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.EZ59Yqx1UZ 00:29:36.790 18:54:53 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.EZ59Yqx1UZ 00:29:36.790 18:54:53 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.EZ59Yqx1UZ 00:29:36.790 18:54:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.EZ59Yqx1UZ 00:29:36.790 18:54:53 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:36.790 18:54:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:37.048 nvme0n1 00:29:37.048 18:54:53 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:29:37.048 18:54:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:37.048 18:54:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:37.048 18:54:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:37.048 18:54:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:37.048 18:54:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:29:37.306 18:54:53 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:29:37.306 18:54:53 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:29:37.306 18:54:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:37.565 18:54:54 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:29:37.565 18:54:54 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:29:37.565 18:54:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:37.565 18:54:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:37.565 18:54:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:37.565 18:54:54 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:37.565 18:54:54 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:29:37.824 18:54:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:37.824 18:54:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:37.824 18:54:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:37.824 18:54:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:37.824 18:54:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:37.824 18:54:54 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:37.824 18:54:54 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:37.824 18:54:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:38.084 18:54:54 keyring_file -- keyring/file.sh@104 -- # jq length 
00:29:38.084 18:54:54 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:29:38.084 18:54:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:38.343 18:54:54 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:38.343 18:54:54 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.EZ59Yqx1UZ 00:29:38.343 18:54:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.EZ59Yqx1UZ 00:29:38.343 18:54:54 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.mHSfw4Ur4w 00:29:38.343 18:54:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.mHSfw4Ur4w 00:29:38.603 18:54:55 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:38.603 18:54:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:38.863 nvme0n1 00:29:38.863 18:54:55 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:38.863 18:54:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:39.123 18:54:55 keyring_file -- keyring/file.sh@112 -- # config='{ 00:29:39.123 "subsystems": [ 00:29:39.123 { 00:29:39.123 "subsystem": "keyring", 00:29:39.123 "config": [ 00:29:39.123 { 00:29:39.123 "method": "keyring_file_add_key", 00:29:39.123 
"params": { 00:29:39.123 "name": "key0", 00:29:39.123 "path": "/tmp/tmp.EZ59Yqx1UZ" 00:29:39.123 } 00:29:39.123 }, 00:29:39.123 { 00:29:39.123 "method": "keyring_file_add_key", 00:29:39.123 "params": { 00:29:39.123 "name": "key1", 00:29:39.123 "path": "/tmp/tmp.mHSfw4Ur4w" 00:29:39.123 } 00:29:39.123 } 00:29:39.123 ] 00:29:39.123 }, 00:29:39.123 { 00:29:39.123 "subsystem": "iobuf", 00:29:39.123 "config": [ 00:29:39.123 { 00:29:39.123 "method": "iobuf_set_options", 00:29:39.123 "params": { 00:29:39.123 "small_pool_count": 8192, 00:29:39.123 "large_pool_count": 1024, 00:29:39.123 "small_bufsize": 8192, 00:29:39.123 "large_bufsize": 135168 00:29:39.123 } 00:29:39.123 } 00:29:39.123 ] 00:29:39.123 }, 00:29:39.123 { 00:29:39.123 "subsystem": "sock", 00:29:39.123 "config": [ 00:29:39.123 { 00:29:39.123 "method": "sock_set_default_impl", 00:29:39.123 "params": { 00:29:39.123 "impl_name": "posix" 00:29:39.123 } 00:29:39.123 }, 00:29:39.123 { 00:29:39.123 "method": "sock_impl_set_options", 00:29:39.123 "params": { 00:29:39.123 "impl_name": "ssl", 00:29:39.123 "recv_buf_size": 4096, 00:29:39.123 "send_buf_size": 4096, 00:29:39.123 "enable_recv_pipe": true, 00:29:39.123 "enable_quickack": false, 00:29:39.123 "enable_placement_id": 0, 00:29:39.123 "enable_zerocopy_send_server": true, 00:29:39.123 "enable_zerocopy_send_client": false, 00:29:39.123 "zerocopy_threshold": 0, 00:29:39.123 "tls_version": 0, 00:29:39.123 "enable_ktls": false 00:29:39.123 } 00:29:39.123 }, 00:29:39.123 { 00:29:39.123 "method": "sock_impl_set_options", 00:29:39.123 "params": { 00:29:39.123 "impl_name": "posix", 00:29:39.123 "recv_buf_size": 2097152, 00:29:39.123 "send_buf_size": 2097152, 00:29:39.123 "enable_recv_pipe": true, 00:29:39.123 "enable_quickack": false, 00:29:39.123 "enable_placement_id": 0, 00:29:39.123 "enable_zerocopy_send_server": true, 00:29:39.123 "enable_zerocopy_send_client": false, 00:29:39.123 "zerocopy_threshold": 0, 00:29:39.123 "tls_version": 0, 00:29:39.123 "enable_ktls": false 
00:29:39.123 } 00:29:39.123 } 00:29:39.123 ] 00:29:39.123 }, 00:29:39.123 { 00:29:39.123 "subsystem": "vmd", 00:29:39.123 "config": [] 00:29:39.123 }, 00:29:39.123 { 00:29:39.123 "subsystem": "accel", 00:29:39.123 "config": [ 00:29:39.123 { 00:29:39.123 "method": "accel_set_options", 00:29:39.123 "params": { 00:29:39.123 "small_cache_size": 128, 00:29:39.123 "large_cache_size": 16, 00:29:39.123 "task_count": 2048, 00:29:39.123 "sequence_count": 2048, 00:29:39.123 "buf_count": 2048 00:29:39.123 } 00:29:39.123 } 00:29:39.123 ] 00:29:39.123 }, 00:29:39.123 { 00:29:39.123 "subsystem": "bdev", 00:29:39.123 "config": [ 00:29:39.123 { 00:29:39.123 "method": "bdev_set_options", 00:29:39.123 "params": { 00:29:39.123 "bdev_io_pool_size": 65535, 00:29:39.123 "bdev_io_cache_size": 256, 00:29:39.124 "bdev_auto_examine": true, 00:29:39.124 "iobuf_small_cache_size": 128, 00:29:39.124 "iobuf_large_cache_size": 16 00:29:39.124 } 00:29:39.124 }, 00:29:39.124 { 00:29:39.124 "method": "bdev_raid_set_options", 00:29:39.124 "params": { 00:29:39.124 "process_window_size_kb": 1024 00:29:39.124 } 00:29:39.124 }, 00:29:39.124 { 00:29:39.124 "method": "bdev_iscsi_set_options", 00:29:39.124 "params": { 00:29:39.124 "timeout_sec": 30 00:29:39.124 } 00:29:39.124 }, 00:29:39.124 { 00:29:39.124 "method": "bdev_nvme_set_options", 00:29:39.124 "params": { 00:29:39.124 "action_on_timeout": "none", 00:29:39.124 "timeout_us": 0, 00:29:39.124 "timeout_admin_us": 0, 00:29:39.124 "keep_alive_timeout_ms": 10000, 00:29:39.124 "arbitration_burst": 0, 00:29:39.124 "low_priority_weight": 0, 00:29:39.124 "medium_priority_weight": 0, 00:29:39.124 "high_priority_weight": 0, 00:29:39.124 "nvme_adminq_poll_period_us": 10000, 00:29:39.124 "nvme_ioq_poll_period_us": 0, 00:29:39.124 "io_queue_requests": 512, 00:29:39.124 "delay_cmd_submit": true, 00:29:39.124 "transport_retry_count": 4, 00:29:39.124 "bdev_retry_count": 3, 00:29:39.124 "transport_ack_timeout": 0, 00:29:39.124 "ctrlr_loss_timeout_sec": 0, 00:29:39.124 
"reconnect_delay_sec": 0, 00:29:39.124 "fast_io_fail_timeout_sec": 0, 00:29:39.124 "disable_auto_failback": false, 00:29:39.124 "generate_uuids": false, 00:29:39.124 "transport_tos": 0, 00:29:39.124 "nvme_error_stat": false, 00:29:39.124 "rdma_srq_size": 0, 00:29:39.124 "io_path_stat": false, 00:29:39.124 "allow_accel_sequence": false, 00:29:39.124 "rdma_max_cq_size": 0, 00:29:39.124 "rdma_cm_event_timeout_ms": 0, 00:29:39.124 "dhchap_digests": [ 00:29:39.124 "sha256", 00:29:39.124 "sha384", 00:29:39.124 "sha512" 00:29:39.124 ], 00:29:39.124 "dhchap_dhgroups": [ 00:29:39.124 "null", 00:29:39.124 "ffdhe2048", 00:29:39.124 "ffdhe3072", 00:29:39.124 "ffdhe4096", 00:29:39.124 "ffdhe6144", 00:29:39.124 "ffdhe8192" 00:29:39.124 ] 00:29:39.124 } 00:29:39.124 }, 00:29:39.124 { 00:29:39.124 "method": "bdev_nvme_attach_controller", 00:29:39.124 "params": { 00:29:39.124 "name": "nvme0", 00:29:39.124 "trtype": "TCP", 00:29:39.124 "adrfam": "IPv4", 00:29:39.124 "traddr": "127.0.0.1", 00:29:39.124 "trsvcid": "4420", 00:29:39.124 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:39.124 "prchk_reftag": false, 00:29:39.124 "prchk_guard": false, 00:29:39.124 "ctrlr_loss_timeout_sec": 0, 00:29:39.124 "reconnect_delay_sec": 0, 00:29:39.124 "fast_io_fail_timeout_sec": 0, 00:29:39.124 "psk": "key0", 00:29:39.124 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:39.124 "hdgst": false, 00:29:39.124 "ddgst": false 00:29:39.124 } 00:29:39.124 }, 00:29:39.124 { 00:29:39.124 "method": "bdev_nvme_set_hotplug", 00:29:39.124 "params": { 00:29:39.124 "period_us": 100000, 00:29:39.124 "enable": false 00:29:39.124 } 00:29:39.124 }, 00:29:39.124 { 00:29:39.124 "method": "bdev_wait_for_examine" 00:29:39.124 } 00:29:39.124 ] 00:29:39.124 }, 00:29:39.124 { 00:29:39.124 "subsystem": "nbd", 00:29:39.124 "config": [] 00:29:39.124 } 00:29:39.124 ] 00:29:39.124 }' 00:29:39.124 18:54:55 keyring_file -- keyring/file.sh@114 -- # killprocess 1288530 00:29:39.124 18:54:55 keyring_file -- common/autotest_common.sh@948 -- 
# '[' -z 1288530 ']' 00:29:39.124 18:54:55 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1288530 00:29:39.124 18:54:55 keyring_file -- common/autotest_common.sh@953 -- # uname 00:29:39.124 18:54:55 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:39.124 18:54:55 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1288530 00:29:39.124 18:54:55 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:39.124 18:54:55 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:39.124 18:54:55 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1288530' 00:29:39.124 killing process with pid 1288530 00:29:39.124 18:54:55 keyring_file -- common/autotest_common.sh@967 -- # kill 1288530 00:29:39.124 Received shutdown signal, test time was about 1.000000 seconds 00:29:39.124 00:29:39.124 Latency(us) 00:29:39.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:39.124 =================================================================================================================== 00:29:39.124 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:39.124 18:54:55 keyring_file -- common/autotest_common.sh@972 -- # wait 1288530 00:29:39.385 18:54:55 keyring_file -- keyring/file.sh@117 -- # bperfpid=1290045 00:29:39.385 18:54:55 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1290045 /var/tmp/bperf.sock 00:29:39.385 18:54:55 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1290045 ']' 00:29:39.385 18:54:55 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:39.385 18:54:55 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:39.385 18:54:55 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 
00:29:39.385 18:54:55 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:29:39.385 "subsystems": [ 00:29:39.385 { 00:29:39.385 "subsystem": "keyring", 00:29:39.385 "config": [ 00:29:39.385 { 00:29:39.385 "method": "keyring_file_add_key", 00:29:39.385 "params": { 00:29:39.385 "name": "key0", 00:29:39.385 "path": "/tmp/tmp.EZ59Yqx1UZ" 00:29:39.385 } 00:29:39.385 }, 00:29:39.385 { 00:29:39.385 "method": "keyring_file_add_key", 00:29:39.385 "params": { 00:29:39.385 "name": "key1", 00:29:39.385 "path": "/tmp/tmp.mHSfw4Ur4w" 00:29:39.385 } 00:29:39.385 } 00:29:39.385 ] 00:29:39.385 }, 00:29:39.385 { 00:29:39.385 "subsystem": "iobuf", 00:29:39.385 "config": [ 00:29:39.385 { 00:29:39.385 "method": "iobuf_set_options", 00:29:39.385 "params": { 00:29:39.385 "small_pool_count": 8192, 00:29:39.385 "large_pool_count": 1024, 00:29:39.385 "small_bufsize": 8192, 00:29:39.385 "large_bufsize": 135168 00:29:39.385 } 00:29:39.385 } 00:29:39.385 ] 00:29:39.385 }, 00:29:39.385 { 00:29:39.385 "subsystem": "sock", 00:29:39.385 "config": [ 00:29:39.385 { 00:29:39.385 "method": "sock_set_default_impl", 00:29:39.385 "params": { 00:29:39.385 "impl_name": "posix" 00:29:39.385 } 00:29:39.385 }, 00:29:39.385 { 00:29:39.385 "method": "sock_impl_set_options", 00:29:39.385 "params": { 00:29:39.385 "impl_name": "ssl", 00:29:39.385 "recv_buf_size": 4096, 00:29:39.385 "send_buf_size": 4096, 00:29:39.385 "enable_recv_pipe": true, 00:29:39.385 "enable_quickack": false, 00:29:39.385 "enable_placement_id": 0, 00:29:39.385 "enable_zerocopy_send_server": true, 00:29:39.385 "enable_zerocopy_send_client": false, 00:29:39.385 "zerocopy_threshold": 0, 00:29:39.385 "tls_version": 0, 00:29:39.385 "enable_ktls": false 00:29:39.385 } 00:29:39.385 }, 00:29:39.385 { 00:29:39.385 "method": "sock_impl_set_options", 00:29:39.385 "params": { 00:29:39.385 "impl_name": "posix", 00:29:39.385 "recv_buf_size": 2097152, 00:29:39.385 "send_buf_size": 2097152, 00:29:39.385 "enable_recv_pipe": true, 00:29:39.385 
"enable_quickack": false, 00:29:39.385 "enable_placement_id": 0, 00:29:39.385 "enable_zerocopy_send_server": true, 00:29:39.385 "enable_zerocopy_send_client": false, 00:29:39.385 "zerocopy_threshold": 0, 00:29:39.385 "tls_version": 0, 00:29:39.385 "enable_ktls": false 00:29:39.385 } 00:29:39.385 } 00:29:39.385 ] 00:29:39.385 }, 00:29:39.385 { 00:29:39.385 "subsystem": "vmd", 00:29:39.385 "config": [] 00:29:39.385 }, 00:29:39.385 { 00:29:39.385 "subsystem": "accel", 00:29:39.385 "config": [ 00:29:39.385 { 00:29:39.385 "method": "accel_set_options", 00:29:39.385 "params": { 00:29:39.385 "small_cache_size": 128, 00:29:39.385 "large_cache_size": 16, 00:29:39.385 "task_count": 2048, 00:29:39.385 "sequence_count": 2048, 00:29:39.385 "buf_count": 2048 00:29:39.385 } 00:29:39.385 } 00:29:39.385 ] 00:29:39.385 }, 00:29:39.385 { 00:29:39.385 "subsystem": "bdev", 00:29:39.385 "config": [ 00:29:39.385 { 00:29:39.385 "method": "bdev_set_options", 00:29:39.385 "params": { 00:29:39.385 "bdev_io_pool_size": 65535, 00:29:39.385 "bdev_io_cache_size": 256, 00:29:39.385 "bdev_auto_examine": true, 00:29:39.385 "iobuf_small_cache_size": 128, 00:29:39.385 "iobuf_large_cache_size": 16 00:29:39.385 } 00:29:39.385 }, 00:29:39.385 { 00:29:39.385 "method": "bdev_raid_set_options", 00:29:39.385 "params": { 00:29:39.385 "process_window_size_kb": 1024 00:29:39.385 } 00:29:39.385 }, 00:29:39.385 { 00:29:39.385 "method": "bdev_iscsi_set_options", 00:29:39.385 "params": { 00:29:39.385 "timeout_sec": 30 00:29:39.385 } 00:29:39.385 }, 00:29:39.385 { 00:29:39.385 "method": "bdev_nvme_set_options", 00:29:39.385 "params": { 00:29:39.385 "action_on_timeout": "none", 00:29:39.385 "timeout_us": 0, 00:29:39.385 "timeout_admin_us": 0, 00:29:39.385 "keep_alive_timeout_ms": 10000, 00:29:39.385 "arbitration_burst": 0, 00:29:39.385 "low_priority_weight": 0, 00:29:39.385 "medium_priority_weight": 0, 00:29:39.385 "high_priority_weight": 0, 00:29:39.385 "nvme_adminq_poll_period_us": 10000, 00:29:39.385 
"nvme_ioq_poll_period_us": 0, 00:29:39.385 "io_queue_requests": 512, 00:29:39.385 "delay_cmd_submit": true, 00:29:39.385 "transport_retry_count": 4, 00:29:39.385 "bdev_retry_count": 3, 00:29:39.385 "transport_ack_timeout": 0, 00:29:39.385 "ctrlr_loss_timeout_sec": 0, 00:29:39.385 "reconnect_delay_sec": 0, 00:29:39.385 "fast_io_fail_timeout_sec": 0, 00:29:39.385 "disable_auto_failback": false, 00:29:39.385 "generate_uuids": false, 00:29:39.385 "transport_tos": 0, 00:29:39.385 "nvme_error_stat": false, 00:29:39.385 "rdma_srq_size": 0, 00:29:39.385 "io_path_stat": false, 00:29:39.385 "allow_accel_sequence": false, 00:29:39.385 "rdma_max_cq_size": 0, 00:29:39.385 "rdma_cm_event_timeout_ms": 0, 00:29:39.385 "dhchap_digests": [ 00:29:39.385 "sha256", 00:29:39.385 "sha384", 00:29:39.385 "sha512" 00:29:39.385 ], 00:29:39.386 "dhchap_dhgroups": [ 00:29:39.386 "null", 00:29:39.386 "ffdhe2048", 00:29:39.386 "ffdhe3072", 00:29:39.386 "ffdhe4096", 00:29:39.386 "ffdhe6144", 00:29:39.386 "ffdhe8192" 00:29:39.386 ] 00:29:39.386 } 00:29:39.386 }, 00:29:39.386 { 00:29:39.386 "method": "bdev_nvme_attach_controller", 00:29:39.386 "params": { 00:29:39.386 "name": "nvme0", 00:29:39.386 "trtype": "TCP", 00:29:39.386 "adrfam": "IPv4", 00:29:39.386 "traddr": "127.0.0.1", 00:29:39.386 "trsvcid": "4420", 00:29:39.386 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:39.386 "prchk_reftag": false, 00:29:39.386 "prchk_guard": false, 00:29:39.386 "ctrlr_loss_timeout_sec": 0, 00:29:39.386 "reconnect_delay_sec": 0, 00:29:39.386 "fast_io_fail_timeout_sec": 0, 00:29:39.386 "psk": "key0", 00:29:39.386 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:39.386 "hdgst": false, 00:29:39.386 "ddgst": false 00:29:39.386 } 00:29:39.386 }, 00:29:39.386 { 00:29:39.386 "method": "bdev_nvme_set_hotplug", 00:29:39.386 "params": { 00:29:39.386 "period_us": 100000, 00:29:39.386 "enable": false 00:29:39.386 } 00:29:39.386 }, 00:29:39.386 { 00:29:39.386 "method": "bdev_wait_for_examine" 00:29:39.386 } 00:29:39.386 ] 
00:29:39.386 }, 00:29:39.386 { 00:29:39.386 "subsystem": "nbd", 00:29:39.386 "config": [] 00:29:39.386 } 00:29:39.386 ] 00:29:39.386 }' 00:29:39.386 18:54:55 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:39.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:39.386 18:54:55 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:39.386 18:54:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:39.386 [2024-07-15 18:54:55.940373] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 00:29:39.386 [2024-07-15 18:54:55.940424] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1290045 ] 00:29:39.386 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.386 [2024-07-15 18:54:55.993238] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.386 [2024-07-15 18:54:56.072967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.644 [2024-07-15 18:54:56.231038] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:40.213 18:54:56 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:40.213 18:54:56 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:29:40.213 18:54:56 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:40.213 18:54:56 keyring_file -- keyring/file.sh@120 -- # jq length 00:29:40.213 18:54:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:40.472 18:54:56 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:40.472 
18:54:56 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:29:40.472 18:54:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:40.472 18:54:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:40.472 18:54:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:40.472 18:54:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:40.472 18:54:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:40.472 18:54:57 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:40.472 18:54:57 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:29:40.472 18:54:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:40.472 18:54:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:40.472 18:54:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:40.472 18:54:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:40.472 18:54:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:40.731 18:54:57 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:40.731 18:54:57 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:40.731 18:54:57 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:40.731 18:54:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:40.990 18:54:57 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:40.990 18:54:57 keyring_file -- keyring/file.sh@1 -- # cleanup 00:29:40.990 18:54:57 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.EZ59Yqx1UZ /tmp/tmp.mHSfw4Ur4w 00:29:40.990 18:54:57 keyring_file -- keyring/file.sh@20 -- 
# killprocess 1290045 00:29:40.990 18:54:57 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1290045 ']' 00:29:40.990 18:54:57 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1290045 00:29:40.990 18:54:57 keyring_file -- common/autotest_common.sh@953 -- # uname 00:29:40.990 18:54:57 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:40.990 18:54:57 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1290045 00:29:40.990 18:54:57 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:40.990 18:54:57 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:40.990 18:54:57 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1290045' 00:29:40.990 killing process with pid 1290045 00:29:40.990 18:54:57 keyring_file -- common/autotest_common.sh@967 -- # kill 1290045 00:29:40.990 Received shutdown signal, test time was about 1.000000 seconds 00:29:40.990 00:29:40.990 Latency(us) 00:29:40.990 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:40.990 =================================================================================================================== 00:29:40.990 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:40.990 18:54:57 keyring_file -- common/autotest_common.sh@972 -- # wait 1290045 00:29:40.990 18:54:57 keyring_file -- keyring/file.sh@21 -- # killprocess 1288296 00:29:40.990 18:54:57 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1288296 ']' 00:29:40.990 18:54:57 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1288296 00:29:40.990 18:54:57 keyring_file -- common/autotest_common.sh@953 -- # uname 00:29:40.990 18:54:57 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:40.990 18:54:57 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1288296 00:29:41.249 18:54:57 keyring_file -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:41.249 18:54:57 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:41.249 18:54:57 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1288296' 00:29:41.249 killing process with pid 1288296 00:29:41.249 18:54:57 keyring_file -- common/autotest_common.sh@967 -- # kill 1288296 00:29:41.249 [2024-07-15 18:54:57.723761] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:41.249 18:54:57 keyring_file -- common/autotest_common.sh@972 -- # wait 1288296 00:29:41.509 00:29:41.509 real 0m11.929s 00:29:41.509 user 0m28.244s 00:29:41.509 sys 0m2.724s 00:29:41.509 18:54:58 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:41.509 18:54:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:41.509 ************************************ 00:29:41.509 END TEST keyring_file 00:29:41.509 ************************************ 00:29:41.509 18:54:58 -- common/autotest_common.sh@1142 -- # return 0 00:29:41.509 18:54:58 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:29:41.509 18:54:58 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:29:41.509 18:54:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:41.509 18:54:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:41.509 18:54:58 -- common/autotest_common.sh@10 -- # set +x 00:29:41.509 ************************************ 00:29:41.509 START TEST keyring_linux 00:29:41.509 ************************************ 00:29:41.509 18:54:58 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:29:41.509 * Looking for test storage... 
00:29:41.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:41.509 18:54:58 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:41.509 18:54:58 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:41.509 18:54:58 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:29:41.509 18:54:58 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:41.509 18:54:58 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:41.509 18:54:58 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:41.509 18:54:58 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:41.509 18:54:58 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:41.509 18:54:58 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:41.509 18:54:58 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:41.509 18:54:58 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:41.509 18:54:58 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:41.509 18:54:58 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:41.509 18:54:58 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:41.509 18:54:58 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:41.509 18:54:58 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:41.509 18:54:58 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:41.509 18:54:58 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:41.509 18:54:58 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:41.509 18:54:58 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:41.509 18:54:58 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:41.509 18:54:58 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:41.509 18:54:58 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:41.509 18:54:58 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.509 18:54:58 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.509 18:54:58 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.509 18:54:58 keyring_linux -- paths/export.sh@5 -- # export PATH 00:29:41.509 18:54:58 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.509 18:54:58 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:29:41.509 18:54:58 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:41.509 18:54:58 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:41.509 18:54:58 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:41.509 18:54:58 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:41.509 18:54:58 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:41.509 18:54:58 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:41.509 18:54:58 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:41.509 18:54:58 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:41.509 18:54:58 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:41.509 18:54:58 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:41.509 18:54:58 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:41.509 18:54:58 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:29:41.509 18:54:58 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:29:41.509 18:54:58 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:29:41.509 18:54:58 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:29:41.509 18:54:58 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:41.769 18:54:58 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:29:41.769 18:54:58 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:41.769 18:54:58 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:41.769 18:54:58 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:29:41.769 18:54:58 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:41.769 18:54:58 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:41.769 18:54:58 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:41.769 18:54:58 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:41.769 18:54:58 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:41.769 18:54:58 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:41.769 18:54:58 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:41.769 18:54:58 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:29:41.769 18:54:58 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:29:41.769 /tmp/:spdk-test:key0 00:29:41.769 18:54:58 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:29:41.769 18:54:58 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:41.769 18:54:58 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:29:41.769 18:54:58 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:41.769 18:54:58 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:41.769 18:54:58 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:29:41.769 18:54:58 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:41.769 18:54:58 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:29:41.769 18:54:58 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:41.769 18:54:58 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:41.769 18:54:58 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:41.769 18:54:58 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:41.769 18:54:58 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:41.769 18:54:58 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:29:41.769 18:54:58 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:29:41.769 /tmp/:spdk-test:key1 00:29:41.769 18:54:58 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1290428 00:29:41.769 18:54:58 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1290428 00:29:41.769 18:54:58 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:41.769 18:54:58 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1290428 ']' 00:29:41.769 18:54:58 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.769 18:54:58 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:41.769 18:54:58 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.769 18:54:58 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:41.769 18:54:58 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:41.769 [2024-07-15 18:54:58.357766] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:29:41.769 [2024-07-15 18:54:58.357818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1290428 ] 00:29:41.769 EAL: No free 2048 kB hugepages reported on node 1 00:29:41.769 [2024-07-15 18:54:58.410288] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.028 [2024-07-15 18:54:58.489739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.597 18:54:59 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:42.597 18:54:59 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:29:42.597 18:54:59 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:29:42.597 18:54:59 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.597 18:54:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:42.597 [2024-07-15 18:54:59.156573] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:42.597 null0 00:29:42.597 [2024-07-15 18:54:59.188620] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:42.597 [2024-07-15 18:54:59.188943] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:42.597 18:54:59 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.597 18:54:59 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:29:42.597 822156727 00:29:42.597 18:54:59 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:29:42.597 883067578 00:29:42.597 18:54:59 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1290610 00:29:42.597 18:54:59 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1290610 
/var/tmp/bperf.sock 00:29:42.598 18:54:59 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:29:42.598 18:54:59 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1290610 ']' 00:29:42.598 18:54:59 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:42.598 18:54:59 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:42.598 18:54:59 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:42.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:42.598 18:54:59 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:42.598 18:54:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:42.598 [2024-07-15 18:54:59.259398] Starting SPDK v24.09-pre git sha1 abb6b4c21 / DPDK 24.03.0 initialization... 
00:29:42.598 [2024-07-15 18:54:59.259439] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1290610 ] 00:29:42.598 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.859 [2024-07-15 18:54:59.311481] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.859 [2024-07-15 18:54:59.384430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.532 18:55:00 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:43.532 18:55:00 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:29:43.532 18:55:00 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:29:43.532 18:55:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:29:43.532 18:55:00 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:29:43.532 18:55:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:43.792 18:55:00 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:43.792 18:55:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:44.050 [2024-07-15 18:55:00.623887] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:44.050 
nvme0n1 00:29:44.050 18:55:00 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:29:44.050 18:55:00 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:29:44.050 18:55:00 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:44.050 18:55:00 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:44.050 18:55:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:44.050 18:55:00 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:44.307 18:55:00 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:29:44.307 18:55:00 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:44.307 18:55:00 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:29:44.307 18:55:00 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:29:44.307 18:55:00 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:44.307 18:55:00 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:29:44.307 18:55:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:44.566 18:55:01 keyring_linux -- keyring/linux.sh@25 -- # sn=822156727 00:29:44.566 18:55:01 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:29:44.566 18:55:01 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:44.566 18:55:01 keyring_linux -- keyring/linux.sh@26 -- # [[ 822156727 == \8\2\2\1\5\6\7\2\7 ]] 00:29:44.566 18:55:01 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 822156727 00:29:44.566 18:55:01 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:29:44.566 18:55:01 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:44.566 Running I/O for 1 seconds... 00:29:45.502 00:29:45.502 Latency(us) 00:29:45.502 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:45.502 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:45.502 nvme0n1 : 1.01 14946.56 58.39 0.00 0.00 8530.40 2607.19 11853.47 00:29:45.502 =================================================================================================================== 00:29:45.502 Total : 14946.56 58.39 0.00 0.00 8530.40 2607.19 11853.47 00:29:45.502 0 00:29:45.502 18:55:02 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:45.502 18:55:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:45.761 18:55:02 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:29:45.761 18:55:02 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:29:45.761 18:55:02 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:45.761 18:55:02 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:45.761 18:55:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:45.761 18:55:02 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:46.020 18:55:02 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:29:46.020 18:55:02 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:46.020 18:55:02 keyring_linux -- keyring/linux.sh@23 -- # return 00:29:46.020 18:55:02 keyring_linux -- 
keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:46.020 18:55:02 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:29:46.020 18:55:02 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:46.020 18:55:02 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:46.020 18:55:02 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:46.020 18:55:02 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:46.020 18:55:02 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:46.020 18:55:02 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:46.020 18:55:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:46.280 [2024-07-15 18:55:02.730523] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:46.280 [2024-07-15 18:55:02.730810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e4fd0 (107): Transport endpoint is not connected 00:29:46.280 [2024-07-15 18:55:02.731804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x20e4fd0 (9): Bad file descriptor 00:29:46.280 [2024-07-15 18:55:02.732805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:46.280 [2024-07-15 18:55:02.732814] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:46.280 [2024-07-15 18:55:02.732821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:46.280 request: 00:29:46.280 { 00:29:46.280 "name": "nvme0", 00:29:46.280 "trtype": "tcp", 00:29:46.280 "traddr": "127.0.0.1", 00:29:46.280 "adrfam": "ipv4", 00:29:46.280 "trsvcid": "4420", 00:29:46.280 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:46.280 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:46.280 "prchk_reftag": false, 00:29:46.280 "prchk_guard": false, 00:29:46.280 "hdgst": false, 00:29:46.280 "ddgst": false, 00:29:46.280 "psk": ":spdk-test:key1", 00:29:46.280 "method": "bdev_nvme_attach_controller", 00:29:46.280 "req_id": 1 00:29:46.280 } 00:29:46.280 Got JSON-RPC error response 00:29:46.280 response: 00:29:46.280 { 00:29:46.280 "code": -5, 00:29:46.280 "message": "Input/output error" 00:29:46.280 } 00:29:46.280 18:55:02 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:29:46.280 18:55:02 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:46.280 18:55:02 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:46.280 18:55:02 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:46.280 18:55:02 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:29:46.280 18:55:02 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:46.280 18:55:02 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:29:46.280 18:55:02 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:29:46.280 18:55:02 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:29:46.280 18:55:02 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key0 00:29:46.280 18:55:02 keyring_linux -- keyring/linux.sh@33 -- # sn=822156727 00:29:46.280 18:55:02 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 822156727 00:29:46.280 1 links removed 00:29:46.280 18:55:02 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:46.280 18:55:02 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:29:46.280 18:55:02 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:29:46.280 18:55:02 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:29:46.280 18:55:02 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:29:46.280 18:55:02 keyring_linux -- keyring/linux.sh@33 -- # sn=883067578 00:29:46.280 18:55:02 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 883067578 00:29:46.280 1 links removed 00:29:46.280 18:55:02 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1290610 00:29:46.280 18:55:02 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1290610 ']' 00:29:46.280 18:55:02 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1290610 00:29:46.280 18:55:02 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:29:46.280 18:55:02 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:46.280 18:55:02 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1290610 00:29:46.280 18:55:02 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:46.280 18:55:02 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:46.280 18:55:02 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1290610' 00:29:46.280 killing process with pid 1290610 00:29:46.280 18:55:02 keyring_linux -- common/autotest_common.sh@967 -- # kill 1290610 00:29:46.280 Received shutdown signal, test time was about 1.000000 seconds 00:29:46.280 00:29:46.280 Latency(us) 00:29:46.280 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:46.280 =================================================================================================================== 00:29:46.280 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:46.280 18:55:02 keyring_linux -- common/autotest_common.sh@972 -- # wait 1290610 00:29:46.280 18:55:02 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1290428 00:29:46.280 18:55:02 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1290428 ']' 00:29:46.539 18:55:02 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1290428 00:29:46.539 18:55:02 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:29:46.539 18:55:02 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:46.539 18:55:02 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1290428 00:29:46.539 18:55:03 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:46.539 18:55:03 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:46.539 18:55:03 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1290428' 00:29:46.539 killing process with pid 1290428 00:29:46.539 18:55:03 keyring_linux -- common/autotest_common.sh@967 -- # kill 1290428 00:29:46.539 18:55:03 keyring_linux -- common/autotest_common.sh@972 -- # wait 1290428 00:29:46.802 00:29:46.802 real 0m5.235s 00:29:46.802 user 0m9.237s 00:29:46.802 sys 0m1.481s 00:29:46.802 18:55:03 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:46.802 18:55:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:46.802 ************************************ 00:29:46.802 END TEST keyring_linux 00:29:46.802 ************************************ 00:29:46.802 18:55:03 -- common/autotest_common.sh@1142 -- # return 0 00:29:46.802 18:55:03 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:29:46.802 18:55:03 -- spdk/autotest.sh@312 -- # '[' 0 -eq 
1 ']' 00:29:46.802 18:55:03 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:29:46.802 18:55:03 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:29:46.802 18:55:03 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:29:46.802 18:55:03 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:29:46.802 18:55:03 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:29:46.802 18:55:03 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:29:46.802 18:55:03 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:29:46.802 18:55:03 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:29:46.802 18:55:03 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:29:46.802 18:55:03 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:29:46.802 18:55:03 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:29:46.802 18:55:03 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:29:46.802 18:55:03 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:29:46.802 18:55:03 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:29:46.802 18:55:03 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:29:46.802 18:55:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:46.802 18:55:03 -- common/autotest_common.sh@10 -- # set +x 00:29:46.802 18:55:03 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:29:46.802 18:55:03 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:29:46.802 18:55:03 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:29:46.802 18:55:03 -- common/autotest_common.sh@10 -- # set +x 00:29:50.993 INFO: APP EXITING 00:29:50.993 INFO: killing all VMs 00:29:50.993 INFO: killing vhost app 00:29:50.993 INFO: EXIT DONE 00:29:53.530 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:29:53.530 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:29:53.530 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:29:53.530 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:29:53.530 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:29:53.788 0000:00:04.3 (8086 2021): Already using 
the ioatdma driver 00:29:53.788 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:29:53.788 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:29:53.788 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:29:53.788 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:29:53.788 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:29:53.788 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:29:53.788 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:29:53.788 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:29:53.788 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:29:53.788 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:29:53.788 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:29:56.321 Cleaning 00:29:56.321 Removing: /var/run/dpdk/spdk0/config 00:29:56.321 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:56.321 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:56.321 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:56.321 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:56.321 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:56.321 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:56.321 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:56.321 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:56.321 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:56.321 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:56.321 Removing: /var/run/dpdk/spdk1/config 00:29:56.321 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:56.321 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:56.321 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:56.321 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:56.321 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:56.321 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:29:56.321 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:56.321 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:56.321 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:56.321 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:56.321 Removing: /var/run/dpdk/spdk1/mp_socket 00:29:56.321 Removing: /var/run/dpdk/spdk2/config 00:29:56.321 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:56.321 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:56.321 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:56.321 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:56.321 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:56.321 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:29:56.321 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:56.321 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:56.321 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:56.321 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:56.321 Removing: /var/run/dpdk/spdk3/config 00:29:56.321 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:56.321 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:56.321 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:56.321 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:56.321 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:56.321 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:56.321 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:56.321 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:56.321 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:56.321 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:56.321 Removing: /var/run/dpdk/spdk4/config 00:29:56.321 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:56.321 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:56.321 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:29:56.321 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:29:56.321 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:29:56.321 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:29:56.321 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:29:56.321 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:29:56.321 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:29:56.321 Removing: /var/run/dpdk/spdk4/hugepage_info
00:29:56.321 Removing: /dev/shm/bdev_svc_trace.1
00:29:56.321 Removing: /dev/shm/nvmf_trace.0
00:29:56.321 Removing: /dev/shm/spdk_tgt_trace.pid907186
00:29:56.321 Removing: /var/run/dpdk/spdk0
00:29:56.321 Removing: /var/run/dpdk/spdk1
00:29:56.321 Removing: /var/run/dpdk/spdk2
00:29:56.321 Removing: /var/run/dpdk/spdk3
00:29:56.321 Removing: /var/run/dpdk/spdk4
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1000486
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1006494
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1013169
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1013242
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1013975
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1014858
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1015773
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1016304
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1016462
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1016693
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1016706
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1016713
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1017625
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1018546
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1019460
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1019926
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1019934
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1020256
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1021423
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1022607
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1030738
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1031184
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1035306
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1041068
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1043658
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1054557
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1063449
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1065100
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1066000
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1082584
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1086350
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1111332
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1115602
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1117217
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1119159
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1119320
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1119510
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1119744
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1120251
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1122104
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1123090
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1123646
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1125899
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1126544
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1127179
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1131269
00:29:56.321 Removing: /var/run/dpdk/spdk_pid1141641
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1145678
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1151705
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1153088
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1154516
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1158802
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1162830
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1170278
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1170376
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1174900
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1175128
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1175352
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1175808
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1175813
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1180289
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1180866
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1185190
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1188207
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1193857
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1199184
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1207553
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1214715
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1214722
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1232576
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1233254
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1233944
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1234675
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1235524
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1236614
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1237301
00:29:56.580 Removing: /var/run/dpdk/spdk_pid1237889
00:29:56.581 Removing: /var/run/dpdk/spdk_pid1242115
00:29:56.581 Removing: /var/run/dpdk/spdk_pid1242412
00:29:56.581 Removing: /var/run/dpdk/spdk_pid1248335
00:29:56.581 Removing: /var/run/dpdk/spdk_pid1248606
00:29:56.581 Removing: /var/run/dpdk/spdk_pid1250825
00:29:56.581 Removing: /var/run/dpdk/spdk_pid1258327
00:29:56.581 Removing: /var/run/dpdk/spdk_pid1258332
00:29:56.581 Removing: /var/run/dpdk/spdk_pid1263353
00:29:56.581 Removing: /var/run/dpdk/spdk_pid1265316
00:29:56.581 Removing: /var/run/dpdk/spdk_pid1267274
00:29:56.581 Removing: /var/run/dpdk/spdk_pid1268328
00:29:56.581 Removing: /var/run/dpdk/spdk_pid1270305
00:29:56.581 Removing: /var/run/dpdk/spdk_pid1271367
00:29:56.581 Removing: /var/run/dpdk/spdk_pid1280384
00:29:56.581 Removing: /var/run/dpdk/spdk_pid1280846
00:29:56.581 Removing: /var/run/dpdk/spdk_pid1281495
00:29:56.581 Removing: /var/run/dpdk/spdk_pid1283754
00:29:56.581 Removing: /var/run/dpdk/spdk_pid1284246
00:29:56.581 Removing: /var/run/dpdk/spdk_pid1284714
00:29:56.581 Removing: /var/run/dpdk/spdk_pid1288296
00:29:56.581 Removing: /var/run/dpdk/spdk_pid1288530
00:29:56.581 Removing: /var/run/dpdk/spdk_pid1290045
00:29:56.581 Removing: /var/run/dpdk/spdk_pid1290428
00:29:56.581 Removing: /var/run/dpdk/spdk_pid1290610
00:29:56.581 Removing: /var/run/dpdk/spdk_pid905030
00:29:56.581 Removing: /var/run/dpdk/spdk_pid906115
00:29:56.581 Removing: /var/run/dpdk/spdk_pid907186
00:29:56.581 Removing: /var/run/dpdk/spdk_pid907828
00:29:56.581 Removing: /var/run/dpdk/spdk_pid908772
00:29:56.581 Removing: /var/run/dpdk/spdk_pid909008
00:29:56.581 Removing: /var/run/dpdk/spdk_pid909985
00:29:56.581 Removing: /var/run/dpdk/spdk_pid910218
00:29:56.581 Removing: /var/run/dpdk/spdk_pid910390
00:29:56.581 Removing: /var/run/dpdk/spdk_pid911958
00:29:56.581 Removing: /var/run/dpdk/spdk_pid913236
00:29:56.581 Removing: /var/run/dpdk/spdk_pid913606
00:29:56.581 Removing: /var/run/dpdk/spdk_pid913901
00:29:56.838 Removing: /var/run/dpdk/spdk_pid914197
00:29:56.838 Removing: /var/run/dpdk/spdk_pid914493
00:29:56.838 Removing: /var/run/dpdk/spdk_pid914830
00:29:56.838 Removing: /var/run/dpdk/spdk_pid915120
00:29:56.838 Removing: /var/run/dpdk/spdk_pid915394
00:29:56.839 Removing: /var/run/dpdk/spdk_pid916211
00:29:56.839 Removing: /var/run/dpdk/spdk_pid919521
00:29:56.839 Removing: /var/run/dpdk/spdk_pid919782
00:29:56.839 Removing: /var/run/dpdk/spdk_pid920046
00:29:56.839 Removing: /var/run/dpdk/spdk_pid920276
00:29:56.839 Removing: /var/run/dpdk/spdk_pid920764
00:29:56.839 Removing: /var/run/dpdk/spdk_pid920783
00:29:56.839 Removing: /var/run/dpdk/spdk_pid921270
00:29:56.839 Removing: /var/run/dpdk/spdk_pid921488
00:29:56.839 Removing: /var/run/dpdk/spdk_pid921764
00:29:56.839 Removing: /var/run/dpdk/spdk_pid921777
00:29:56.839 Removing: /var/run/dpdk/spdk_pid922041
00:29:56.839 Removing: /var/run/dpdk/spdk_pid922267
00:29:56.839 Removing: /var/run/dpdk/spdk_pid922641
00:29:56.839 Removing: /var/run/dpdk/spdk_pid922873
00:29:56.839 Removing: /var/run/dpdk/spdk_pid923159
00:29:56.839 Removing: /var/run/dpdk/spdk_pid923434
00:29:56.839 Removing: /var/run/dpdk/spdk_pid923666
00:29:56.839 Removing: /var/run/dpdk/spdk_pid923729
00:29:56.839 Removing: /var/run/dpdk/spdk_pid923986
00:29:56.839 Removing: /var/run/dpdk/spdk_pid924231
00:29:56.839 Removing: /var/run/dpdk/spdk_pid924485
00:29:56.839 Removing: /var/run/dpdk/spdk_pid924733
00:29:56.839 Removing: /var/run/dpdk/spdk_pid924982
00:29:56.839 Removing: /var/run/dpdk/spdk_pid925233
00:29:56.839 Removing: /var/run/dpdk/spdk_pid925482
00:29:56.839 Removing: /var/run/dpdk/spdk_pid925733
00:29:56.839 Removing: /var/run/dpdk/spdk_pid925984
00:29:56.839 Removing: /var/run/dpdk/spdk_pid926231
00:29:56.839 Removing: /var/run/dpdk/spdk_pid926485
00:29:56.839 Removing: /var/run/dpdk/spdk_pid926732
00:29:56.839 Removing: /var/run/dpdk/spdk_pid926977
00:29:56.839 Removing: /var/run/dpdk/spdk_pid927232
00:29:56.839 Removing: /var/run/dpdk/spdk_pid927482
00:29:56.839 Removing: /var/run/dpdk/spdk_pid927734
00:29:56.839 Removing: /var/run/dpdk/spdk_pid927993
00:29:56.839 Removing: /var/run/dpdk/spdk_pid928261
00:29:56.839 Removing: /var/run/dpdk/spdk_pid928524
00:29:56.839 Removing: /var/run/dpdk/spdk_pid928799
00:29:56.839 Removing: /var/run/dpdk/spdk_pid929025
00:29:56.839 Removing: /var/run/dpdk/spdk_pid929325
00:29:56.839 Removing: /var/run/dpdk/spdk_pid932970
00:29:56.839 Removing: /var/run/dpdk/spdk_pid976452
00:29:56.839 Removing: /var/run/dpdk/spdk_pid980627
00:29:56.839 Removing: /var/run/dpdk/spdk_pid990633
00:29:56.839 Removing: /var/run/dpdk/spdk_pid995882
00:29:56.839 Removing: /var/run/dpdk/spdk_pid999802
00:29:56.839 Clean
00:29:57.097 18:55:13 -- common/autotest_common.sh@1451 -- # return 0
00:29:57.097 18:55:13 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:29:57.097 18:55:13 -- common/autotest_common.sh@728 -- # xtrace_disable
00:29:57.097 18:55:13 -- common/autotest_common.sh@10 -- # set +x
00:29:57.097 18:55:13 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:29:57.097 18:55:13 -- common/autotest_common.sh@728 -- # xtrace_disable
00:29:57.097 18:55:13 -- common/autotest_common.sh@10 -- # set +x
00:29:57.097 18:55:13 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:29:57.097 18:55:13 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:29:57.097 18:55:13 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:29:57.097 18:55:13 -- spdk/autotest.sh@391 -- # hash lcov
00:29:57.097 18:55:13 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:29:57.097 18:55:13 -- spdk/autotest.sh@393 -- # hostname
00:29:57.097 18:55:13 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:29:57.097 geninfo: WARNING: invalid characters removed from testname!
00:30:19.036 18:55:33 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:19.661 18:55:36 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:21.565 18:55:37 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:23.038 18:55:39 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:24.941 18:55:41 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:26.848 18:55:43 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:28.754 18:55:45 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:30:28.754 18:55:45 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:28.754 18:55:45 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:30:28.754 18:55:45 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:28.754 18:55:45 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:28.754 18:55:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:28.754 18:55:45 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:28.754 18:55:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:28.754 18:55:45 -- paths/export.sh@5 -- $ export PATH
00:30:28.754 18:55:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:28.754 18:55:45 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:30:28.754 18:55:45 -- common/autobuild_common.sh@444 -- $ date +%s
00:30:28.754 18:55:45 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721062545.XXXXXX
00:30:28.754 18:55:45 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721062545.FTFqFn
00:30:28.754 18:55:45 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:30:28.754 18:55:45 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:30:28.754 18:55:45 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:30:28.754 18:55:45 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:30:28.754 18:55:45 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:30:28.754 18:55:45 -- common/autobuild_common.sh@460 -- $ get_config_params
00:30:28.754 18:55:45 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:30:28.754 18:55:45 -- common/autotest_common.sh@10 -- $ set +x
00:30:28.754 18:55:45 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:30:28.754 18:55:45 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:30:28.754 18:55:45 -- pm/common@17 -- $ local monitor
00:30:28.754 18:55:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:28.754 18:55:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:28.754 18:55:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:28.754 18:55:45 -- pm/common@21 -- $ date +%s
00:30:28.754 18:55:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:28.754 18:55:45 -- pm/common@21 -- $ date +%s
00:30:28.754 18:55:45 -- pm/common@25 -- $ sleep 1
00:30:28.754 18:55:45 -- pm/common@21 -- $ date +%s
00:30:28.754 18:55:45 -- pm/common@21 -- $ date +%s
00:30:28.754 18:55:45 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721062545
00:30:28.755 18:55:45 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721062545
00:30:28.755 18:55:45 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721062545
00:30:28.755 18:55:45 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721062545
00:30:28.755 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721062545_collect-vmstat.pm.log
00:30:28.755 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721062545_collect-cpu-load.pm.log
00:30:28.755 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721062545_collect-cpu-temp.pm.log
00:30:28.755 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721062545_collect-bmc-pm.bmc.pm.log
00:30:29.693 18:55:46 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:30:29.693 18:55:46 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96
00:30:29.693 18:55:46 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:29.693 18:55:46 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:30:29.693 18:55:46 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:30:29.693 18:55:46 -- spdk/autopackage.sh@19 -- $ timing_finish
00:30:29.693 18:55:46 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:30:29.693 18:55:46 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:30:29.693 18:55:46 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:30:29.693 18:55:46 -- spdk/autopackage.sh@20 -- $ exit 0
00:30:29.693 18:55:46 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:30:29.693 18:55:46 -- pm/common@29 -- $ signal_monitor_resources TERM
00:30:29.693 18:55:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:30:29.693 18:55:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:29.693 18:55:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:30:29.693 18:55:46 -- pm/common@44 -- $ pid=1300559
00:30:29.693 18:55:46 -- pm/common@50 -- $ kill -TERM 1300559
00:30:29.693 18:55:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:29.693 18:55:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:30:29.693 18:55:46 -- pm/common@44 -- $ pid=1300561
00:30:29.693 18:55:46 -- pm/common@50 -- $ kill -TERM 1300561
00:30:29.693 18:55:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:29.693 18:55:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:30:29.693 18:55:46 -- pm/common@44 -- $ pid=1300562
00:30:29.693 18:55:46 -- pm/common@50 -- $ kill -TERM 1300562
00:30:29.694 18:55:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:29.694 18:55:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:30:29.694 18:55:46 -- pm/common@44 -- $ pid=1300587
00:30:29.694 18:55:46 -- pm/common@50 -- $ sudo -E kill -TERM 1300587
00:30:29.694 + [[ -n 801781 ]]
00:30:29.694 + sudo kill 801781
00:30:29.704 [Pipeline] }
00:30:29.722 [Pipeline] // stage
00:30:29.727 [Pipeline] }
00:30:29.744 [Pipeline] // timeout
00:30:29.749 [Pipeline] }
00:30:29.767 [Pipeline] // catchError
00:30:29.773 [Pipeline] }
00:30:29.792 [Pipeline] // wrap
00:30:29.798 [Pipeline] }
00:30:29.814 [Pipeline] // catchError
00:30:29.851 [Pipeline] stage
00:30:29.854 [Pipeline] { (Epilogue)
00:30:29.873 [Pipeline] catchError
00:30:29.875 [Pipeline] {
00:30:29.895 [Pipeline] echo
00:30:29.897 Cleanup processes
00:30:29.904 [Pipeline] sh
00:30:30.192 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:30.192 1300676 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:30:30.192 1300959 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:30.207 [Pipeline] sh
00:30:30.492 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:30.492 ++ grep -v 'sudo pgrep'
00:30:30.492 ++ awk '{print $1}'
00:30:30.492 + sudo kill -9 1300676
00:30:30.504 [Pipeline] sh
00:30:30.785 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:30:40.777 [Pipeline] sh
00:30:41.057 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:30:41.057 Artifacts sizes are good
00:30:41.074 [Pipeline] archiveArtifacts
00:30:41.082 Archiving artifacts
00:30:41.232 [Pipeline] sh
00:30:41.518 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:30:41.535 [Pipeline] cleanWs
00:30:41.591 [WS-CLEANUP] Deleting project workspace...
00:30:41.591 [WS-CLEANUP] Deferred wipeout is used...
00:30:41.603 [WS-CLEANUP] done
00:30:41.605 [Pipeline] }
00:30:41.624 [Pipeline] // catchError
00:30:41.637 [Pipeline] sh
00:30:41.920 + logger -p user.info -t JENKINS-CI
00:30:41.932 [Pipeline] }
00:30:41.951 [Pipeline] // stage
00:30:41.957 [Pipeline] }
00:30:41.975 [Pipeline] // node
00:30:41.982 [Pipeline] End of Pipeline
00:30:42.019 Finished: SUCCESS